"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:6d3dd4f1-0f24-4316-ab7c-57190e516042","http://resolver.tudelft.nl/uuid:6d3dd4f1-0f24-4316-ab7c-57190e516042","Optimal Ship Fuel Selection under Life Cycle Uncertainty","Zwaginga, J.J. (TU Delft Ship Design, Production and Operations); Lagemann, Benjamin (Norwegian University of Science and Technology (NTNU); SINTEF Ocean); Erikstad, Stein Ove (Norwegian University of Science and Technology (NTNU)); Pruyn, J.F.J. (TU Delft Ship Design, Production and Operations; Rotterdam University of Applied Sciences)","","2024","Shipowners need to prepare for low-emission fuel alternatives to meet the IMO 2050 goals. This is a complex problem due to conflicting objectives and a high degree of uncertainty. To help navigate this problem, this paper investigates how methods that take uncertainty into account, like robust optimization and stochastic optimization, could be used to address uncertainty while taking into account multiple objectives. Robust optimization incorporates uncertainty using a scalable measure of conservativeness, while stochastic programming adds an expected value to the objective function that represents uncertain scenarios. The methods are compared by applying them to the same dataset for a Supramax bulk carrier and taking fuel prices and market-based measures as uncertain factors. It is found that both offer important insights into the impact of uncertainty, which is an improvement when compared to deterministic optimization, that does not take uncertainty into account. From a practical standpoint, both methods show that methanol and LNG ships allow a cheap but large reduction in emissions through the use of biofuels. 
More importantly, even though there are limitations due to the parameter range assumptions, ignoring uncertainty with respect to future fuels is worse as a starting point for discussions.","ship design; alternative fuel; energy system selection; uncertainty; optimization; robust; stochastic","en","journal article","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:421e00ec-e147-41b0-9c3b-2a14cf41c4d6","http://resolver.tudelft.nl/uuid:421e00ec-e147-41b0-9c3b-2a14cf41c4d6","Linear Time-Varying Parameter Estimation: Maximum A Posteriori Approach via Semidefinite Programming","Vakili, S. (TU Delft Team Manuel Mazo Jr); Khosravi, M. (TU Delft Team Khosravi); Mohajerin Esfahani, P. (TU Delft Team Peyman Mohajerin Esfahani); Mazo, M. (TU Delft Team Manuel Mazo Jr)","","2024","We study the problem of identifying a linear time-varying output map from measurements and linear time-varying system states, which are perturbed with Gaussian observation noise and process uncertainty, respectively. Employing a stochastic model as prior knowledge for the parameters of the unknown output map, we reconstruct their estimates from input/output pairs via a Bayesian approach to optimize the posterior probability density of the output map parameters. The resulting problem is a non-convex optimization, for which we propose a tractable linear matrix inequalities approximation to warm-start a first-order subsequent method. The efficacy of our algorithm is shown experimentally against classical Expectation Maximization and Dual Kalman Smoother approaches.","Estimation; identification; linear matrix inequalities; optimization; semidefinite programming","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-06-25","","","Team Manuel Mazo Jr","","",""
"uuid:af383529-1f15-463d-b18b-0ec3e1ed328a","http://resolver.tudelft.nl/uuid:af383529-1f15-463d-b18b-0ec3e1ed328a","Scheduling methods for modular shipbuilding","van der Beek, T. (TU Delft Ship Design, Production and Operations)","Aardal, K.I. (promotor); van Essen, J.T. (copromotor); Pruyn, J.F.J. (copromotor); Delft University of Technology (degree granting institution)","2023","In the field of shipbuilding, there is a growing demand for faster and more efficient production processes, along with a need for swift adoption of new technologies. Modular production emerges as a potential solution, involving the development of a product family with a base platform and various modules. Instead of designing and producing each product individually, modular production allows for the combination of modules to create diverse products. Despite the recognized potential of this approach, there is a lack of quantitative results, and scheduling challenges in modular shipbuilding need to be addressed for its successful implementation.
This dissertation focuses on identifying and resolving three key challenges related to scheduling in modular production. The first challenge revolves around the definition and utilization of modules. Factors such as resource requirements, project sequencing influenced by module size, and project-specific variations in module usage are crucial considerations. The second challenge pertains to inventory management, where reduced production time increases the impact of long lead times, and standardized components spread inventory costs across multiple projects. The third challenge involves stochastic scheduling, leveraging the structural similarities among products in a modular production system to optimize schedules for future projects.
To address these challenges, the dissertation explores the Resource Constrained Project Scheduling Problem with a flexible Project Structure (RCPSP-PS). It introduces a Mixed Integer Linear Programming (MILP) model and a solution method, demonstrating its superiority over existing methods. Given the NP-hardness of the problem, heuristic methods, including group graphs, hybrid differential evolution, and ant colony optimization algorithms, are proposed to quickly find feasible solutions.
The scope expands to the production of a product family through the Resource Constrained Project Scheduling Problem with Modular construction and new Project arrivals (RCPSPMP). This extended problem incorporates stochastic project arrivals and inventory allocation, modeling the pre-assembly of modules. A Progressive Hedging (PH) algorithm is introduced to consider future project arrivals, ultimately aiming to create a profitable product family rather than individual products.
Finally, stochastic project arrivals are considered for the standard Resource Constrained Project Scheduling Problem (RCPSP). Simulation optimization is initially employed, but a data-assisted method using neural networks is introduced to significantly reduce computational costs while maintaining solution quality.
In conclusion, this dissertation presents comprehensive methods for scheduling in modular shipbuilding, addressing challenges related to flexible project structures, nonrenewable resources, resource allocation, and stochastic project arrivals. The versatility of these methods extends their applicability beyond shipbuilding to various industries.","Modular shipbuilding; Project scheduling; Resource constrained project scheduling problem; optimization","en","doctoral thesis","","978-94-6473-319-8","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:d40a316e-ba81-4e3f-b644-9275284c1fb9","http://resolver.tudelft.nl/uuid:d40a316e-ba81-4e3f-b644-9275284c1fb9","Quantum computer-assisted global optimization in geophysics illustrated with stack-power maximization for refraction residual statics estimation","Dukalski, Marcin (Aramco Global Research Center Delft); Rovetta, Diego (Aramco Global Research Center Delft); van de Linde, Stan (Student TU Delft; TNO); Möller, M. (TU Delft Numerical Analysis); Neumann, Niels (TNO); Phillipson, Frank (TNO; Universiteit Maastricht)","","2023","Much of recent progress in geophysics can be attributed to the adaptation of heterogeneous high-performance computing architectures. It is projected that the next major leap in many areas of science, and hence hopefully in geophysics too, will be due to the emergence of quantum computers. Finding a right combination of hardware, algorithms, and a use case, however, proves to be a very challenging task - especially when looking for a relevant application that scales efficiently on a quantum computer and is difficult to solve using classical means. We find that maximizing stack power for residual statics correction, an NP-hard combinatorial optimization problem, appears to naturally fit a particular type of quantum computing known as quantum annealing. We express the underlying objective function as a quadratic unconstrained binary optimization, which is a quantum-native formulation of the problem. We choose some solution space and define a proper encoding to translate the problem variables into qubit states. We find that these choices can have a significant impact on the maximum problem size that can fit on the quantum annealer and on the fidelity of the final result. To improve the latter, we embed the quantum optimization step in a hybrid classical-quantum workflow, which aims to increase the frequency of finding the global, rather than some local, optimum of the objective function. 
Finally, we find that a generic, black-box, hybrid classical-quantum solver could also be used to solve stack-power maximization problems proximal to industrial relevance and capable of surpassing deterministic solvers prone to cycle skipping. A custom-built workflow capable of solving larger problems with even higher robustness and greater user control appears to be within reach in the very near future.","global search; imaging; near surface; optimization; statics","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-09-01","","","Numerical Analysis","","",""
"uuid:da5f35cc-aea9-470a-b026-b1f6c91a2583","http://resolver.tudelft.nl/uuid:da5f35cc-aea9-470a-b026-b1f6c91a2583","Seismic acquisition design based on full-wavefield migration","Revelo Obando, B.A. (TU Delft Applied Geophysics and Petrophysics); Blacquière, G. (TU Delft Applied Geophysics and Petrophysics)","","2023","The ultimate goal in survey design is to obtain the acquisition parameters that enable acquiring the most affordable data that fulfills certain image quality requirements. We propose a method that allows optimization of the receiver geometry for a fixed source distribution. The former is parameterized with a receiver density function that determines the number of receivers per unit area. We optimize this receiverdensity function through an iterative gradient descent scheme that minimizes the difference between the image obtained with the current acquisition geometry and a reference image. The reference image is obtained from prior subsurface information that is assumed to be available. We tested the method with different subsurface models. The results show that the acquisition geometry is optimized according to the complexity of each subsurface model. The receivers are moved towards the areas where more data is needed for obtaining better imaging.","acquisition; imaging; optimization; survey design","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-08-08","","","Applied Geophysics and Petrophysics","","",""
"uuid:6a8c3773-aa7f-4042-859d-7d73175e2b98","http://resolver.tudelft.nl/uuid:6a8c3773-aa7f-4042-859d-7d73175e2b98","Stackelberg evolutionary game theory: how to manage evolving systems","Stein, Alexander (Queen Mary University of London); Salvioli, M. (TU Delft Transport and Logistics); Garjani, Hasti (TU Delft Mathematical Physics); Dubbeldam, J.L.A. (TU Delft Mathematical Physics); Viossat, Yannick (Université Paris-Dauphine); Brown, Joel S. (Lee Moffitt Cancer Center and Research Institute); Staňková, K. (TU Delft Transport and Logistics)","","2023","Stackelberg evolutionary game (SEG) theory combines classical and evolutionary game theory to frame interactions between a rational leader and evolving followers. In some of these interactions, the leader wants to preserve the evolving system (e.g. fisheries management), while in others, they try to drive the system to extinction (e.g. pest control). Often the worst strategy for the leader is to adopt a constant aggressive strategy (e.g. overfishing in fisheries management or maximum tolerable dose in cancer treatment). Taking into account the ecological dynamics typically leads to better outcomes for the leader and corresponds to the Nash equilibria in game-theoretic terms. However, the leader's most profitable strategy is to anticipate and steer the eco-evolutionary dynamics, leading to the Stackelberg equilibrium of the game. We show how our results have the potential to help in fields where humans try to bring an evolutionary system into the desired outcome, such as, among others, fisheries management, pest management and cancer treatment. Finally, we discuss limitations and opportunities for applying SEGs to improve the management of evolving biological systems. 
This article is part of the theme issue 'Half a century of evolutionary games: a synthesis of theory, application and future directions'.","evolutionary game theory; Darwinian dynamics; cancer evolution; fisheries management; optimization; evolutionary rescue","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:535391af-58a0-4882-b2bb-44c21240b22d","http://resolver.tudelft.nl/uuid:535391af-58a0-4882-b2bb-44c21240b22d","Does enforcing glenohumeral joint stability matter?: A new rapid muscle redundancy solver highlights the importance of non-superficial shoulder muscles","Belli, I. (TU Delft Human-Robot Interaction); Joshi, S.D. (TU Delft Learning & Autonomous Control); Prendergast, J.M. (TU Delft Human-Robot Interaction); Beck, I.L.Y. (TU Delft Human-Robot Interaction); Della Santina, C. (TU Delft Learning & Autonomous Control; Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)); Peternel, L. (TU Delft Human-Robot Interaction); Seth, A. (TU Delft Biomechatronics & Human-Machine Control)","","2023","The complexity of the human shoulder girdle enables the large mobility of the upper extremity, but also introduces instability of the glenohumeral (GH) joint. Shoulder movements are generated by coordinating large superficial and deeper stabilizing muscles spanning numerous degrees-of-freedom. How shoulder muscles are coordinated to stabilize the movement of the GH joint remains widely unknown. Musculoskeletal simulations are powerful tools to gain insights into the actions of individual muscles and particularly of those that are difficult to measure. In this study, we analyze how enforcement of GH joint stability in a musculoskeletal model affects the estimates of individual muscle activity during shoulder movements. To estimate both muscle activity and GH stability from recorded shoulder movements, we developed a Rapid Muscle Redundancy (RMR) solver to include constraints on joint reaction forces (JRFs) from a musculoskeletal model. The RMR solver yields muscle activations and joint forces by minimizing the weighted sum of squared-activations, while matching experimental motion. 
We implemented three new features: first, computed muscle forces include active and passive fiber contributions; second, muscle activation rates are enforced to be physiological; and third, JRFs are efficiently formulated as linear functions of activations. Muscle activity from the RMR solver without GH stability was not different from the computed muscle control (CMC) algorithm and electromyography of superficial muscles. The efficiency of the solver enabled us to test over 3600 trials sampled within the uncertainty of the experimental movements to evaluate the differences in muscle activity with and without GH joint stability enforced. We found that enforcing GH stability significantly increases the estimated activity of the rotator cuff muscles but not of most superficial muscles. Therefore, a comparison of shoulder model muscle activity to EMG measurements of superficial muscles alone is insufficient to validate the activity of rotator cuff muscles estimated from musculoskeletal models.","optimization; muscle redundancy; musculoskeletal modeling; shoulder","en","journal article","","","","","","","","","","","Human-Robot Interaction","","",""
"uuid:8fe1138a-34aa-4a61-a108-68ead2e1257b","http://resolver.tudelft.nl/uuid:8fe1138a-34aa-4a61-a108-68ead2e1257b","Population games with replicator dynamics under event-triggered payoff provider and a demand response application","Martinez-Piazuelo, Juan (Universitat Politecnica de Catalunya); Ananduta, W. (TU Delft Team Sergio Grammatico); Ocampo-Martinez, Carlos (Universitat Politecnica de Catalunya); Grammatico, S. (TU Delft Team Sergio Grammatico; TU Delft Team Bart De Schutter); Quijano, Nicanor (Universidad de los Andes)","","2023","We consider a large population of decision makers that choose their evolutionary strategies based on simple pairwise imitation rules. We describe such a dynamic process by the replicator dynamics. Differently from the available literature, where the payoffs signals are assumed to be updated continuously, we consider a more realistic scenario where they are updated occasionally. Our main technical contribution is to devise two event-triggered communication schemes with asymptotic convergence guarantees to a Nash equilibrium. Finally, we show how our proposed approach is applicable as an efficient distributed demand response mechanism.","Asymptotic stability; Costs; Demand response; Event-triggered control; game theory; Games; optimization; Protocols; Sociology; Statistics","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-12-12","","","Team Sergio Grammatico","","",""
"uuid:c9e067c3-5212-4dbb-ac61-7dabe1f22c7c","http://resolver.tudelft.nl/uuid:c9e067c3-5212-4dbb-ac61-7dabe1f22c7c","Distributed Nonlinear Trajectory Optimization for Multi-Robot Motion Planning","Ferranti, L. (TU Delft Learning & Autonomous Control); Lyons, L. (TU Delft Learning & Autonomous Control); Negenborn, R.R. (TU Delft Transport Engineering and Logistics); Keviczky, T. (TU Delft Team Tamas Keviczky); Alonso-Mora, J. (TU Delft Learning & Autonomous Control)","","2023","This work presents a method for multi-robot coordination based on a novel distributed nonlinear model predictive control (NMPC) formulation for trajectory optimization and its modified version to mitigate the effects of packet losses and delays in the communication among the robots. Our algorithms consider that each robot is equipped with an onboard computation unit to solve a local control problem and communicate with neighboring autonomous robots via a wireless network. The difference between the two proposed methods is in the way the robots exchange information to coordinate. The information exchange can occur in a following: 1) synchronous or 2) asynchronous fashion. By relying on the theory of the nonconvex alternating direction method of multipliers (ADMM), we show that the proposed solutions converge to a (local) solution of the centralized problem. For both algorithms, the communication exchange preserves the safety of the robots; that is, collisions with neighboring autonomous robots are prevented. The proposed approaches can be applied to various multi-robot scenarios and robot models. 
In this work, we assess our methods, both in simulation and with experiments, for the coordination of a team of autonomous vehicles in two scenarios: 1) unsupervised intersection crossing and 2) platooning.","Collision avoidance; Delays; fault-tolerant control; multi-robot systems; optimal control; optimization; Packet loss; Planning; Robot kinematics; Robots; Trajectory optimization","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-04-17","","","Learning & Autonomous Control","","",""
"uuid:ee440cf4-e8ba-4554-9f7d-d3c5ebeb62f0","http://resolver.tudelft.nl/uuid:ee440cf4-e8ba-4554-9f7d-d3c5ebeb62f0","Lead-time-based freight routing in multi-modal networks considering the Physical Internet","Shahedi, Alireza (University of Genoa); Gallo, Federico (University of Genoa); Saeednia, M. (TU Delft Transport and Planning); Sacco, Nicola (University of Genoa)","","2023","This paper addresses the problem of optimizing the transport of goods in the Physical Internet (PI) framework in a multi-modal setting using a multi-objective mixed-integer linear programming (MILP) approach. The model is specifically designed to meet the requirements related to modular shipments and PI-hubs, and in particular, determines the allocation of modular shipments to each transport mode in an intermodal setting. In doing so, parallel direct connection via road, the delivery times and the transportation costs are minimized. The model is applied to a numerical case study, to test its effectiveness to enhance freight transport efficiency within the PI framework, by exploiting, in particular, all the capacities of the available vehicles. In addition, a sensitivity analysis is conducted on some model parameters, to test its reaction to changes in the supply system and in the objective priorities. Results show that all the shipments are effectively transported between the origin and the destination terminals, they are divided into modules when necessary, and the selected transport modes, allocation strategy, and delivery times vary accordingly to the objective priorities.","freight transport; physical internet; optimization","en","journal article","","","","","","","","","","","Transport and Planning","","",""
"uuid:05598ff0-39d9-4dc5-93bf-6892ce1623a6","http://resolver.tudelft.nl/uuid:05598ff0-39d9-4dc5-93bf-6892ce1623a6","Parametric Curve Comparison for Modeling Floating Offshore Wind Turbine Substructures","Ojo, Adebayo (University of Strathclyde); Collu, Maurizio (University of Strathclyde); Coraddu, A. (TU Delft Ship Design, Production and Operations)","","2023","The drive for the cost reduction of floating offshore wind turbine (FOWT) systems to the levels of fixed bottom foundation turbine systems can be achieved with creative design and analysis techniques of the platform with free-form curves to save numerical simulation time and minimize the mass of steel (cost of steel) required for design. This study aims to compare four parametric free-form curves (cubic spline, B-spline, Non-Uniform Rational B-Spline and cubic Hermite spline) within a design and optimization framework using the pattern search gradient free optimization algorithm to explore and select an optimal design from the design space. The best performance free-form curve within the framework is determined using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The TOPSIS technique shows the B-spline curve as the best performing free-form curve based on the selection criteria, amongst which are design and analysis computational time, estimated mass of platform and local shape control properties. This study shows that free-form curves like B-spline can be used to expedite the design, analysis and optimization of floating platforms and potentially advance the technology beyond the current level of fixed bottom foundations.","design; FOWT; optimization; parametric free-form; TOPSIS","en","journal article","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:54e17433-f8ca-4736-8572-84b626837643","http://resolver.tudelft.nl/uuid:54e17433-f8ca-4736-8572-84b626837643","Evaluation of centralized/decentralized configuration schemes of CO2 electrochemical reduction-based supply chains","Wiltink, T.J. (TU Delft Energie and Industrie); Yska, Stijn (Student TU Delft); Ramirez, Andrea (TU Delft ChemE/Chemical Engineering); Pérez-Fortes, Mar (TU Delft Energie and Industrie)","","2023","Electrochemical reduction of CO2 (CO2ER) is an emerging technology with the potential to limit the use of fossil-based feedstocks in the petrochemical industry by converting CO2 and renewable electricity into useful products such as syngas. Its successful deployment will depend not only on the technology's performance but also on its integration into the supply chain. In this work, a facility location model is used to gain insights regarding the capacity of CO2ER plants that produce syngas and the implications for the central/decentral placement of these CO2-based syngas plants. Different optimal configurations are examined in the model by changing the syngas transport costs. In this exploratory case, the results indicate that centralization is only an option when the syngas and CO2 transport costs are similar. When syngas transport is more expensive, decentralizing CO2-based syngas plants in the supply chain appears more feasible.","CO electrochemical reduction; CO utilization; optimization; supply chain configurations; supply chain modeling","en","book chapter","Elservier","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-18","","ChemE/Chemical Engineering","Energie and Industrie","","",""
"uuid:570ec2b3-dfd1-4136-ba35-7daf5e8a42fd","http://resolver.tudelft.nl/uuid:570ec2b3-dfd1-4136-ba35-7daf5e8a42fd","HAPPy to Control: A Heuristic And Predictive Policy to Control Large Urban Drainage Systems","van der Werf, Job (TU Delft Sanitary Engineering); Kapelan, Z. (TU Delft Sanitary Engineering); Langeveld, J.G. (TU Delft Sanitary Engineering; Partners4UrbanWater)","","2023","Model Predictive Control (MPC) of Urban Drainage Systems (UDS) has been established as a cost-effective method to reduce pollution. However, the operation of large UDS (containing over 20 actuators) can only be optimized by oversimplifying the UDS dynamics, potentially leading to a decrease in performance and reduction in users' trust, thus inhibiting widespread implementation of MPC procedures. A Heuristic And Predictive Policy (HAPPy) was set up, relying on the dynamic selection of the actuators with the highest impact on the UDS functioning and optimizing those in real-time. The remaining actuators follow a pre-set heuristic procedure. The HAPPy procedure was applied to two separate UDS in Rotterdam with the control objective being the minimization of overflow volume in each of the two cases. Results obtained show that the level of impact of the actuators on the UDS functioning changes during an event and can be predicted using a Random Forest algorithm. These predictions can be used to provide near-global optimal actuator settings resulting in the performance of the HAPPy procedure that is comparable to a full-MPC control and outperforming heuristic control procedures. The number of actuators selected to obtain near-global optimal settings depends on the UDS and rainfall characteristics showing an asymptotic real-time control (RTC) performance as the number of actuators increases. The HAPPy procedure showed different RTC dynamics for medium and large rainfall events, with the former showing a higher level of controllability than the latter. 
For medium events, a relatively small number of actuators suffices to achieve the potential performance improvement.","combined sewer overflows; model predictive control; optimization; real time control; urban drainage systems","en","journal article","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:01a3ced0-59bb-484a-beeb-0bfbe4d904b8","http://resolver.tudelft.nl/uuid:01a3ced0-59bb-484a-beeb-0bfbe4d904b8","Incorporating institutions into optimization-based energy system models","Wang, N. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Verzijlbergh, R.A. (copromotor); Heijnen, P.W. (copromotor); Delft University of Technology (degree granting institution)","2022","The pledge for a carbon-free energy system in 2050 requires significant investments into renewable energy sources (RES). The relevant questions are: what technologies to select, where to build them, how much the capacities are, and at what cost. In order to answer these techno-economic questions, optimization models are commonly used to sketch a least-cost future energy system. However, the energy system is far more complex than a mathematical model. Although optimization models can provide the least-cost system design, they do not guarantee that we can realize this design because some key aspects are not captured by such models: the impact of public acceptance issues, conflicting interests among stakeholders, and the imperfection of markets. These non-technical aspects are generalized as institutions in this thesis. In a socio- technical system like the energy system, considering both the social aspects, the institutions, and the technical system, is pivotal. Therefore, the goal of this thesis is to improve optimization models by including institutions in energy system planning.
Since institutions are not commonly mentioned in energy system planning models, this thesis starts by standardizing institutions through a literature review. The goal is to provide a common ground for discussing institutions and find research trends and gaps in the state-of-the-art. We identified the following research gaps that need deliberate attention: spatial policies, collective decision-making, and bilateral trading with externalities. In this thesis, we developed three models to deal with these institutions. Since these institutions are indispensable in a socio-technical system, including them in optimization models results in socio-technically optimal future energy system designs beyond only the techno-economic optima.","socio-technical systems; optimization; energy system planning; institutions; spatial policies; energy system optimization models; multi-objective optimization; multi-criteria decision-making; bilateral trading; externalities","en","doctoral thesis","","978-94-6366-630-5","","","","","","","","","Energie and Industrie","","",""
"uuid:cdca9bf1-3e6b-4bfc-9d9d-b5acdd3f900d","http://resolver.tudelft.nl/uuid:cdca9bf1-3e6b-4bfc-9d9d-b5acdd3f900d","Generalized Models of Sequential Decision-Making under Uncertainty","Neustroev, G. (TU Delft Algorithmics)","de Weerdt, M.M. (promotor); Verzijlbergh, R.A. (copromotor); Delft University of Technology (degree granting institution)","2022","Sequential decision-making under uncertainty is an important branch of artificial intelligence research with a plethora of real-life applications. In this thesis, we generalize two fundamental properties of the decision-making process. First, we show that the theory on planning methods for finite spaces can be extended to infinite but countable spaces. Second, we propose a unified model of reinforcement learning algorithms that employ the principle of optimism in the face of uncertainty. This model is used to explain why these methods are efficient. We use the developed theory to design novel algorithms. Depending on the user's needs, these algorithms can either automate the decision-making process completely, or provide advice in decision-support systems.
We start by presenting the basic concepts from the theory of decision-making and discuss the two approaches to it: planning and reinforcement learning. We look at a few typical sequential decision-making problems of increasing difficulty. In particular, we present a game that involves grid navigation and the problems of warehouse management and wind farm operation. Next, we survey the state-of-the-art methods for solving such problems.
Based on this analysis, we identify the following research opportunities. In planning, models with non-stationary and countably-infinite data remain relatively untreated because they are equivalent to infinite-dimensional optimization problems, which are notoriously difficult to solve even approximately. In reinforcement learning, optimistic approaches lead to computational efficiency, yet the theory of optimism remains undeveloped. Moreover, while reinforcement learning shines at playing games, such as chess, shōgi, Go, and StarCraft II, its practical applications remain few.
Next, we give an overview of a mathematical framework for sequential decision-making under uncertainty known as the Markov decision process. We explain how the goal of the decision-maker can be expressed as an optimization problem and present two approaches to achieving this goal. The first—more common—approach assigns so-called values to different actions. The other approach uses so-called occupancies that tell how often the agent should choose the actions instead of evaluating how good these actions are. In fact, the two approaches are known to be dual to each other. While this duality is well studied in the finite case, the infinite case is less explored. To address this knowledge gap, we present a new dual formulation for countable problems, both finite and infinite.
Afterwards, we use the dual formulation to design a new planning algorithm for infinite-horizon problems with non-stationary data. These problems are essentially infinite-dimensional optimization problems and as such are impossible to solve exactly using the standard approaches. We show that they can be solved by changing what is defined as optimal behavior: instead of seeking universally optimal policies, we consider initial-decision-optimal ones. Instead of requiring all actions to be planned beforehand, these policies can be used to plan based on the currently observed data. When the next decision is required, the process can be repeated in the same manner, leading to an optimal decision-making strategy. Our approach uses the occupancy-value duality to rule out suboptimal actions based on so-called truncations: finite-time approximations of the infinite-horizon decision-making problem.
We extend the truncation approach to a more general setting of decision-making problems with countably-infinite state spaces. Instead of time-based truncations, we consider state-based ones. This allows us to limit the amount of data required to make the decisions and to design an algorithm for a class of problems that otherwise cannot be solved to optimality. This approach belongs to a family of methods called policy iteration: starting from an initial policy, it constructs a series of improvements in the decisions while ruling out choices that are provably suboptimal.
After that, we turn to reinforcement learning. For a long time, the only provably efficient reinforcement-learning methods were model-based ones; recently, a family of model-free optimistic methods emerged, each of them accompanied by an analysis of how sample-efficient the method is. We, too, study optimistic reinforcement learning, but in contrast to the existing research, we seek to understand not how efficient it is, but why it is efficient. Our analysis results in a formula that explains the three factors that cause regret—the efficiency loss—in optimistic reinforcement learning: the problem size, the measure of exploration, and the estimation error caused by the mismatch between the realized transitions and their true distribution. It can be applied to all of the existing algorithms as well as new ones. We design one such new algorithm and show how our theoretical framework can facilitate the proof of its efficiency.
Finally, we consider a high-impact real-world sequential decision-making problem known as active wake control. Wind turbines can negatively impact each other with their wakes. These wake-induced losses can be reduced by changing the turbine orientations. Unfortunately, the optimal control strategy is non-trivial. To address this, existing approaches use simplified wake models in combination with numerical optimization methods; instead we propose to use model-free reinforcement learning. As a first step towards this goal, we present a wind farm simulator that is suitable for reinforcement learning and better reflects the realities of wind farm operation than other existing tools. Using this simulator, we show that previous research used a suboptimal action representation in this problem; we identify two alternatives, both of which improve the learning efficiency. Additionally, we demonstrate that reinforcement learning is robust to errors in the observations, providing further evidence that it is a fitting approach to active wake control.
Our contributions advance the state of the art in the theory of sequential decision-making under uncertainty and its applications. These advances hint at unexplored connections between countably-infinite planning and optimistic learning, which may lead to even more efficient algorithms for sequential decision-making under uncertainty in the future.","sequential decision-making under uncertainty; optimization; Markov decision processes; planning; linear programming; duality; reinforcement learning; optimistic learning","en","doctoral thesis","","978-94-6366-624-4","","","","","","","","","Algorithmics","","",""
"uuid:877bed45-d775-40bb-bde2-d2322cb334f0","http://resolver.tudelft.nl/uuid:877bed45-d775-40bb-bde2-d2322cb334f0","Decisions on life-cycle reliability of flood defence systems","Klerk, W.J. (TU Delft Hydraulic Structures and Flood Risk)","Delft University of Technology (degree granting institution)","2022","Many countries rely on flood defence systems to prevent economic damage and loss of life due to catastrophic floods. Asset managers of flood defence systems need to cope with the consequences of structural degradation and changing societal and environmental conditions in order to satisfy performance requirements and optimize the societal value of flood defence assets. This is a continuous effort of planning, executing and evaluating a variety of different system interventions. These can be aimed either at reducing the uncertainty about the performance of a flood defence system (e.g., through inspection or monitoring) or at improving that performance (e.g., through reinforcement). Performance is typically expressed as the reliability on a system level, which in this thesis is interpreted as the life-cycle reliability: the estimated reliability with all foreseen interventions in time. The key objective of this thesis is to improve decisions on life-cycle reliability of flood defence systems. This is elaborated for three key topics, with a focus on earthen flood defences (also known as levees or dikes)...","flood defences; levees; asset management; risk-based decision making; Bayesian decision theory; inspection; maintenance; uncertainty reduction; reinforcement; optimization; reliability","en","doctoral thesis","","978-94-6384-313-3","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:4328fd28-3592-433b-a63f-4f6537da2cee","http://resolver.tudelft.nl/uuid:4328fd28-3592-433b-a63f-4f6537da2cee","Surrogate DC Microgrid Models for Optimization of Charging Electric Vehicles under Partial Observability","Veviurko, G. (TU Delft Algorithmics); Böhmer, J.W. (TU Delft Algorithmics); Mackay, Laurens (DC Opportunities R&D); de Weerdt, M.M. (TU Delft Algorithmics)","","2022","Many electric vehicles (EVs) use today’s distribution grids, and their flexibility can be highly beneficial for grid operators. This flexibility can be best exploited by DC power networks, as they allow charging and discharging without extra power electronics and transformation losses. From the grid control perspective, algorithms for planning EV charging are necessary. This paper studies the problem of EV charging planning under limited grid capacity and extends it to the partially observable case. We demonstrate how limited information about the EV locations in a grid may disrupt operation planning in DC grids with tight constraints. We introduce two methods to change the grid topology such that partial observability of the EV locations is resolved. The suggested models are evaluated on the IEEE 16-bus system and multiple randomly generated grids with varying capacities. The experiments show that these methods efficiently solve the partially observable EV charging planning problem and offer a trade-off between computational time and performance.","DC microgrid; partial observability; electric vehicle; optimization","en","journal article","","","","","","","","","","","Algorithmics","","",""
"uuid:54d7ccfa-6f29-4213-a80a-b5dda73f1e04","http://resolver.tudelft.nl/uuid:54d7ccfa-6f29-4213-a80a-b5dda73f1e04","Optimization of complex-geometry high-rise buildings based on wind load analysis","Estrado, Erron (Student TU Delft); Turrin, M. (TU Delft Design Informatics); Eigenraam, P. (TU Delft Structural Design & Mechanics)","","2022","As technology advances, architects often employ innovative, non-standard shapes in their designs for the fast-growing number of high-rise buildings. Conversely, climate change is bringing about an increasing number of dangerous wind events causing damage to buildings and their surroundings. These factors further complicate the already difficult field of structural wind analysis. Current methods for calculating structural wind response either do not cover unconventional building shapes (as with the Eurocode) or, in the case of physical wind tunnel tests and in-depth computational fluid dynamics (CFD) simulations, are prohibitively expensive and time-consuming. Thus, wind load analysis is often relegated to late in the design process. This paper presents the development of a computational method to analyze the effect of wind on the structural behavior of a 3D building model and optimize the external geometry to reduce those effects at an early design phase. It combines CFD, finite-element analysis (FEA), and an optimization algorithm in the popular parametric design tool Grasshopper. This allows it to be used in an early design stage for performance-based design exploration as a complement to the more traditional late-stage methods outlined above. 
After developing the method and testing the timeliness and precision of the CFD and FEA portions on case study buildings, the tool was able to output an optimal geometry as well as a database of improved geometric options with their corresponding performance under wind loading.","Computational fluid dynamics; computational wind engineering; finite-element analysis; generative design; optimization","en","journal article","","","","","","","","","","","Design Informatics","","",""
"uuid:4708fa71-d53d-4a4d-9c21-8da7b7f608e0","http://resolver.tudelft.nl/uuid:4708fa71-d53d-4a4d-9c21-8da7b7f608e0","Marine Biofuels Costs and Emissions Study for the European Supply Chain Till 2030","Gartland, Nicolas (Student TU Delft); Pruyn, J.F.J. (TU Delft Ship Design, Production and Operations)","","2022","The design and preliminary estimations of biomass supply chains are essential in matching energy supply to energy demand. This is especially true of novel/future fuels and technologies in large industries. In this paper, a Mixed Integer Linear Programming (MILP) model was formulated to represent biofuel supply chains across Europe for the production of three novel marine fuels and to allow the selection of fuel conversion technologies, biomass supply locations, and the logistics of transportation from resources to conversion and from conversion to final markets. On top of this, the total production costs and emissions were calculated and compared to current marine fuels to assess the implementation potential and feasibility of these fuels. The MILP model was used to design and analyze optimal distribution and conversion systems, using a realistic data-set covering the European member states and 15 of the largest bunkering ports in the EU. The results showed that on average, the fuels obtained a 72% greenhouse gas (GHG) reduction compared to a fossil fuel comparator and ranged from 22–36 €/GJ in total production costs. It was also discovered that forestry residues were the best-suited biomass for the production of these fuels and that Poland had the highest supply potential of all considered states. 
While the available supply of biomass was sufficient for the demand in the foreseeable future, the largest impediment to the adoption of these fuels is the available refining potential in Europe.","biomass; MILP; supply-chain; optimization; bio-ethanol; bio-methanol; bio-LNG","en","journal article","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:097e50d8-e151-4221-bcc3-bb34d1c6ac89","http://resolver.tudelft.nl/uuid:097e50d8-e151-4221-bcc3-bb34d1c6ac89","Local Stackelberg equilibrium seeking in generalized aggregative games","Fabiani, Filippo (University of Oxford); Tajeddini, Mohammad Amin (University of Tehran); Kebriaei, Hamed (University of Tehran); Grammatico, S. (TU Delft Team Bart De Schutter; TU Delft Team Sergio Grammatico)","","2022","We propose a two-layer, semi-decentralized algorithm to compute a local solution to the Stackelberg equilibrium problem in aggregative games with coupling constraints. Specifically, we focus on a single-leader, multi-follower problem, and after equivalently recasting the Stackelberg game as a mathematical program with complementarity constraints (MPCC), we iteratively convexify a regularized version of the MPCC as the inner problem, whose solution generates a sequence of feasible descent directions for the original MPCC. Thus, by pursuing a descent direction at every outer iteration, we establish convergence to a local Stackelberg equilibrium. Finally, the proposed algorithm is tested on a numerical case study, a hierarchical instance of the charging coordination problem of plug-in electric vehicles (PEVs).","Approximation algorithms; Convergence; Cost function; Couplings; game theory; Games; hierarchical systems; optimization; Stackelberg equilibrium; Standards; Wireless networks","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Team Bart De Schutter","","",""
"uuid:1cae12f0-aae2-4ac9-9ab6-585611a4d1be","http://resolver.tudelft.nl/uuid:1cae12f0-aae2-4ac9-9ab6-585611a4d1be","Evolved interactions stabilize many coexisting phases in multicomponent liquids","Zwicker, David (Max Planck Institute for Dynamics and Self-Organization); Laan, L. (TU Delft BN/Liedewij Laan Lab)","","2022","Phase separation has emerged as an essential concept for the spatial organization inside biological cells. However, despite the clear relevance to virtually all physiological functions, we understand surprisingly little about what phases form in a system of many interacting components, like in cells. Here we introduce a numerical method based on physical relaxation dynamics to study the coexisting phases in such systems. We use our approach to optimize interactions between components, similar to how evolution might have optimized the interactions of proteins. These evolved interactions robustly lead to a defined number of phases, despite substantial uncertainties in the initial composition, while random or designed interactions perform much worse. Moreover, the optimized interactions are robust to perturbations, and they allow fast adaptation to new target phase counts. We thus show that genetically encoded interactions of proteins provide versatile control of phase behavior. The phases forming in our system are also a concrete example of a robust emergent property that does not rely on fine-tuning the parameters of individual constituents.","biomolecular condensates; droplets; optimization; statistical physics","en","journal article","","","","","","","","","","","BN/Liedewij Laan Lab","","",""
"uuid:cd878ce0-6b22-4bdd-988f-fec16c81a741","http://resolver.tudelft.nl/uuid:cd878ce0-6b22-4bdd-988f-fec16c81a741","Advisory-Based Time Slot Management System to Mitigate Waiting Time at Container Terminal Gates","Nadi Najafabadi, A. (TU Delft Transport and Planning); Nugteren, Alex (Student TU Delft); Snelder, M. (TU Delft Transport and Planning); van Lint, J.W.C. (TU Delft Transport and Planning); Rezaei, J. (TU Delft Transport and Logistics)","","2022","This paper introduces an advisory-based time slot management system (TSMS) to control truck arrivals at seaport terminals with the aim of reducing congestion at terminal gates. A modeling framework is proposed, developed, and applied to assess the impact of a truck arrival shift for a case study in the Port of Rotterdam. This system is designed to apply control policies on truck inflow while taking the behavioral aspects of truck operating companies (TOCs) into account. Discrete choice modeling is used to infer the time-of-day preferences of TOCs for container pick-ups from the exchange of information between port and hinterland stakeholders. These preferences are used to shift truck arrivals to the off-peak period, which consequently reduces the high waiting time of trucks at terminal gates. To evaluate the effectiveness of the designed TSMS, a simulation platform that resembles terminal operations has been developed using discrete-event simulation. For the allocation of trucks to a certain time of day, a choice-based stochastic assignment heuristic is designed to approximate the optimum configuration of the truck arrival shift policy experiment. The optimum truck arrival shift design shows that a significant gain can be obtained even at a low shift rate.","carrier; freight systems; logistics; marine; model/modeling; optimization; port; seaports; simulation; terminals; truck; trucking industry research","en","journal article","","","","","","","","","","","Transport and Planning","","",""
"uuid:e645b25f-716a-4fa2-8b91-6803ac781d0f","http://resolver.tudelft.nl/uuid:e645b25f-716a-4fa2-8b91-6803ac781d0f","Graph machine learning for design of high-octane fuels","Rittig, J. (Rheinisch-Westfälische Technische Hochschule); Ritzert, Martin (Aarhus University); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering); Winkler, Stefanie (Rheinisch-Westfälische Technische Hochschule); Weber, J.M. (TU Delft Pattern Recognition and Bioinformatics); Morsch, Philipp (Rheinisch-Westfälische Technische Hochschule); Heufer, Karl Alexander (Rheinisch-Westfälische Technische Hochschule); Grohe, Martin (Rheinisch-Westfälische Technische Hochschule); Mitsos, Alexander (Rheinisch-Westfälische Technische Hochschule; Forschungszentrum Jülich GmbH); Dahmen, Manuel (Forschungszentrum Jülich GmbH)","","2022","Fuels with high knock resistance enable modern spark-ignition engines to achieve high efficiency and thus low CO2 emissions. Identification of molecules with desired autoignition properties, indicated by a high research octane number and a high octane sensitivity, is therefore of great practical relevance and can be supported by computer-aided molecular design (CAMD). Recent developments in the field of graph machine learning (graph-ML) provide novel, promising tools for CAMD. We propose a modular graph-ML CAMD framework that integrates generative graph-ML models with graph neural networks and optimization, enabling the design of molecules with desired ignition properties in a continuous molecular space. In particular, we explore the potential of Bayesian optimization and genetic algorithms in combination with generative graph-ML models. The graph-ML CAMD framework successfully identifies well-established high-octane components. 
It also suggests new candidates, one of which we experimentally investigate and use to illustrate the need for further autoignition training data.","computer-aided molecular design; fuel design; graph machine learning; graph neural networks; machine learning; optimization; renewable fuels; spark-ignition engines","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:e766fef1-2ef2-4158-ba62-73953869aab7","http://resolver.tudelft.nl/uuid:e766fef1-2ef2-4158-ba62-73953869aab7","WARio: efficient code generation for intermittent computing","Kortbeek, V. (TU Delft Embedded Systems); Ghosh, Souradip (Carnegie Mellon University); Hester, Josiah (Northwestern University); Campanoni, Simone (Northwestern University); Pawełczak, Przemysław (TU Delft Embedded Systems)","Jhala, Ranjit (editor); Dillig, Isil (editor)","2022","Intermittently operating embedded computing platforms powered by energy harvesting require software frameworks to protect from errors caused by Write After Read (WAR) dependencies. A powerful method of code protection for systems with non-volatile main memory utilizes compiler analysis to insert a checkpoint inside each WAR violation in the code. However, such software frameworks are oblivious to the code structure, and therefore inefficient, when many consecutive WAR violations exist. Our insight is that by transforming the input code, i.e., moving individual write operations from unique WARs close to each other, we can significantly reduce the number of checkpoints. This idea is the foundation for WARio: a set of compiler transformations for efficient code generation for intermittent computing. WARio, on average, reduces checkpoint overhead by 58%, and up to 88%, compared to the state of the art across various benchmarks.","battery-free; code transformation; compiler; embedded system; intermittent computing; optimization","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Embedded Systems","","",""
"uuid:fb8c99cc-24d6-4718-8986-95833ffc1f49","http://resolver.tudelft.nl/uuid:fb8c99cc-24d6-4718-8986-95833ffc1f49","Balancing and redispatch: the next stepping stones in European electricity market integration: Improving the market design and the efficiency of the procurement of balancing and redispatch services","Poplavskaya, K. (TU Delft Energie and Industrie)","De Vries, Laurens (promotor); Weijnen, M.P.C. (promotor); Delft University of Technology (degree granting institution)","2021","Balancing and redispatch are essential services for the security and stability of the electricity network. Balancing refers to continuously maintaining a balance between supply and demand through activating flexible resources. Redispatch refers to changing the dispatch of generators to remedy network congestion. The need for flexibility resources for balancing and congestion management is ever more pressing due to several policy, market and technological aspects.
In a time of fast-paced, massive transformation such as the energy transition, the electricity system and network are becoming more vulnerable to disturbances, requiring more flexibility. In this dissertation, we test the hypothesis that the efficiency of procurement can be improved with the help of market design adjustments. Thus, we explore the following main question:
How can market design changes help transmission system operators procure balancing and redispatch services in a more economically efficient manner?
The answer to the main research question is subdivided into two parts: the first studies a well-defined and well-established balancing market; the second, building upon the analysis produced in the former, addresses issues related to redispatch. For this, market modelling was combined with analytical and empirical approaches to study the procurement of the two services.
Market harmonization and network integration are developing rapidly in the EU, creating new challenges for the electricity system. This dissertation addresses key issues that system operators, regulators, policymakers and market participants face in the electricity markets today and provides practical recommendations as to how market design can be improved and what other measures are required to ensure economic efficiency. The developed tools provide new means of decision support for energy system stakeholders.
This study not only contributes to improving network security through market design but, by helping reduce system costs, also contributes to overall economic welfare and the achievement of EU policy goals. Finally, it provides the scientific community with insights and methodological know-how, in particular in the fields of agent-based modelling and machine learning, for the study of numerous future questions in the area of electricity market design, bidder incentives and market integration.
1 of tissue alongside cerebral blood flow and arterial transit time in pseudo-continuous arterial spin labeling","Bladt, Piet (Universiteit Antwerpen); den Dekker, A.J. (TU Delft Team Raf Van de Plas; Universiteit Antwerpen); Clement, Patricia (Universiteit Gent); Achten, Eric (Universiteit Gent); Sijbers, Jan (Universiteit Antwerpen)","","2019","Multi-post-labeling-delay pseudo-continuous arterial spin labeling (multi-PLD PCASL) allows for absolute quantification of the cerebral blood flow (CBF) as well as the arterial transit time (ATT). Estimating these perfusion parameters from multi-PLD PCASL data is a non-linear inverse problem, which is commonly tackled by fitting the single-compartment model (SCM) for PCASL, with CBF and ATT as free parameters. The longitudinal relaxation time of tissue T1t is an important parameter in this model, as it governs the decay of the perfusion signal entirely upon entry in the imaging voxel. Conventionally, T1t is fixed to a population average. This approach can cause CBF quantification errors, as T1t can vary significantly inter- and intra-subject. This study compares the impact on CBF quantification, in terms of accuracy and precision, of either fixing T1t, the conventional approach, or estimating it alongside CBF and ATT. It is shown that the conventional approach can cause a significant bias in CBF. Indeed, simulation experiments reveal that if T1t is fixed to a value that is 10% off its true value, this may already result in a bias of 15% in CBF. On the other hand, as is shown by both simulation and real data experiments, estimating T1t along with CBF and ATT results in a loss of CBF precision of the same order, even if the experiment design is optimized for the latter estimation problem. 
Simulation experiments suggest that an optimal balance between accuracy and precision of CBF estimation from multi-PLD PCASL data can be expected when using the two-parameter estimator with a fixed T1t value between population averages of T1t and the longitudinal relaxation time of blood T1b.","cerebral blood flow; experimental design; optimization; perfusion models; pseudo-continuous arterial spin labeling","en","journal article","","","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:66730769-c35f-4183-ba01-ea28af12233d","http://resolver.tudelft.nl/uuid:66730769-c35f-4183-ba01-ea28af12233d","Bayesian Machine Learning in metamaterial design: Fragile becomes supercompressible","Bessa, M.A. (TU Delft (OLD) MSE-5); Głowacki, Piotr (Student TU Delft); Houlder, Michael (Student TU Delft)","","2019","Designing future-proof materials goes beyond a quest for the best. The next generation of materials needs to be adaptive, multipurpose, and tunable. This is not possible by following the traditional experimentally guided trial-and-error process, as this limits the search for untapped regions of the solution space. Here, a computational data-driven approach is followed for exploring a new metamaterial concept and adapting it to different target properties, choice of base materials, length scales, and manufacturing processes. Guided by Bayesian machine learning, two designs are fabricated at different length scales that transform brittle polymers into lightweight, recoverable, and supercompressible metamaterials. The macroscale design is tuned for maximum compressibility, achieving strains beyond 94% and recoverable strengths around 0.1 kPa, while the microscale design reaches recoverable strengths beyond 100 kPa and strains around 80%. The data-driven code is available to facilitate future design and analysis of metamaterials and structures (https://github.com/mabessa/F3DAS).","additive manufacturing; data-driven design; deep learning; machine learning; optimization","en","journal article","","","","","","","","","","","(OLD) MSE-5","","",""
"uuid:cbde185b-7612-4915-b13f-47adb099b0b2","http://resolver.tudelft.nl/uuid:cbde185b-7612-4915-b13f-47adb099b0b2","Integration of Genetic Algorithm and Monte Carlo Simulation for System Design and Cost Allocation Optimization in Complex Network","Baladeh, Aliakbar Eslami (MAPNA Group, Tehran); Khakzad, N. (TU Delft Safety and Security Science)","","2019","Complex networks play a vital role in the reliability analysis of real-world applications, demanding precise and accurate analysis methods for the optimal allocation of cost and reliability. Since the configuration of a system may change with every feasible solution of the cost allocation optimization problem, finding the best arrangement of the system can become very challenging. This paper presents a novel methodology combining Genetic Algorithm (GA) and Monte Carlo (MC) simulation approaches to simultaneously optimize cost allocation and system configuration in complex networks. GA is used to generate configuration-cost pairs, while MC simulation is used to evaluate the reliability of the system for each pair. The application of the developed methodology is demonstrated for power grids as an example of critical complex networks. The results show that the proposed methodology can be readily used in practice.","complex networks; cost allocation; genetic algorithm; Monte Carlo simulation; optimization; Reliability","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","","","","","","Safety and Security Science","","",""
"uuid:441e1c01-5762-49e6-b9bf-05c1985fe545","http://resolver.tudelft.nl/uuid:441e1c01-5762-49e6-b9bf-05c1985fe545","Characterization of aerodynamic performance of ducted wind turbines: A numerical study","Dighe, V.V. (TU Delft Wind Energy); De Oliveira Andrade, G.L. (TU Delft Wind Energy); Avallone, F. (TU Delft Wind Energy); van Bussel, G.J.W. (TU Delft Wind Energy)","","2019","The complex aerodynamic interactions between the rotor and the duct have to be accounted for in the design of ducted wind turbines (DWTs). A numerical study to investigate the characteristics of flow around the DWT using a simplified duct–actuator disc (AD) model is carried out. Inviscid and viscous flow calculations are performed to understand the effects of the duct shape and variable AD loadings on the aerodynamic performance coefficients. The analysis shows that the overall aerodynamic performance of the DWT can be increased by increasing the duct cross-sectional camber. Finally, flow fields from viscous calculations are examined to interpret the effects of inner duct wall flow separation on the overall DWT performance.","CFD; ducted wind turbines; optimization; panel method; RANS","en","journal article","","","","","","","","","","","Wind Energy","","",""
"uuid:f613079c-90a1-47dc-afcb-f6833646ca5a","http://resolver.tudelft.nl/uuid:f613079c-90a1-47dc-afcb-f6833646ca5a","LQG and Gaussian process techniques: For fixed-structure wind turbine control","Bijl, H.J. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); van Wingerden, J.W. (promotor); Delft University of Technology (degree granting institution)","2018","Wind turbines are growing bigger to becomemore cost-efficient. This does increase the severity of the vibrations that are present in the turbine blades, both due to predictable effects like wind shear and tower shadow, and due to less predictable effects like turbulence and flutter. If wind turbines are to become bigger and more cost-efficient, these vibrations need to be reduced. This can be done by installing trailing-edge flaps to the blades. Because of the variety of circumstances which the turbine should operate in, this results in large uncertainties. As such, we need methods that can take stochastic effects into account. Preferably we develop an algorithmthat can learn from online data how the flaps affect the wind turbine and how to optimally control them. A simple prior analysis can be done using a linearized version of the system. In this case it is important to know not only the expected cost (damage) that will be incurred by the wind turbine in various situations, but also the spread of this cost. This can for instance be done by looking at the variance of the cost function. Various expressions are available to analytically calculate this variance. Alternatively, we can prescribe a degree of stability for the system. Due to the limitations of linear approximations of systems, it is more effective to apply nonlinear regression methods. A promising one is Gaussian Process (GP) regression. Given a training set (X, y) it can predict function values f (x¤) for test points x¤. 
It has its basis in Bayesian probability theory, which allows it to not only make this prediction, but also give information (the variance) about its accuracy. The usual way in which GP regression is applied has a few important limitations. Most importantly, it is computationally intensive, especially when applied to constantly growing data sets. In addition, it has difficulties dealing with noise present in the training input points x. There are methods to solve either of these issues, but these tricks generally do not work well together, or their combination requires many computational resources. However, by making the right approximations, like Taylor expansions and at times even linearizations, Gaussian process regression can be applied efficiently, in an online way, to data sets with noisy input points. This enables GP regression to be used for system identification problems like online nonlinear black-box modeling. Another limitation is that it can be difficult to find the optimum of a Gaussian process. The reason is that the optimum of a Gaussian process is not a fixed point but a random variable. The distribution of this optimum cannot be calculated analytically, but we can use particle methods to approximate it. We can subsequently use this principle to efficiently explore an unknown nonlinear function, trying to locate its optimum. To do so, we sample a point x from the optimum distribution, measure the function value f(x) at this point, update the Gaussian process approximation of the function, update the optimum distribution, and repeat this process until the distribution has converged. Finding the optimum of a function in this way has been shown to have competitive performance at keeping the cumulative regret low, compared to similar algorithms. In addition, it allows wind turbines to tune the gains of a fixed-structure controller so as to optimize a nonlinear cost function like the damage equivalent load.
All these improvements are a step forward in the application of Gaussian process regression to wind turbine applications. But as is always the case with research, there are still many things left to improve further.","Gaussian processes; regression; machine learning; optimization; system identification; automatic control; wind energy; smart rotor","en","doctoral thesis","","978-94-6299-501-7","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:44dda417-a658-47d3-998b-48c082c9e989","http://resolver.tudelft.nl/uuid:44dda417-a658-47d3-998b-48c082c9e989","A tensor approach to linear parameter varying system identification","Gunes, Bilal (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2018","","tensor; LPV; identification; data-driven; wind; turbine; statistics; subspace; optimization; tensor decompositions; multi-linear algebra; SVD; MLSVD; HOSVD; tensor trains; tensor networks; polyadic; engineering; wind energy","en","doctoral thesis","","","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:6be0d327-6da6-419f-a6a5-19fd44b1245d","http://resolver.tudelft.nl/uuid:6be0d327-6da6-419f-a6a5-19fd44b1245d","Optimization of water allocation in the Shatt al-Arab River under different salinity regimes and tide impact","Abdullah, A.D.A. (University of Missan); Castro-Gama, Mario (IHE Delft Institute for Water Education); Popescu, Ioana (IHE Delft Institute for Water Education; Politehnica University of Timisoara); van der Zaag, P. (TU Delft Water Resources; IHE Delft Institute for Water Education); Karim, Usama (University of Twente); Al Suhail, Qusay (University of Basrah)","","2018","Wastewater effluents from irrigation and the domestic and industrial sectors have serious impacts in deteriorating water quality in many rivers, particularly in areas under tidal influence. There is a need to develop an approach that considers the impact of human and natural causes of salinization. This study uses a multi-objective optimization–simulation model to investigate and describe the interactions of such impacts in the Shatt al-Arab River, Iraq. The developed model is able to reproduce the salinity distribution in the river given varying conditions. The salinity regime in the river varies according to different hydrological conditions and anthropogenic activities. Due to tidal effects, salinity caused by drainage water is seen to intrude further upstream into the river. The applied approach provides a way to obtain optimal solutions where both river salinity and deficit in water supply can be minimized. The approach is used for exploring the trade-off between these two objectives.","drainage water; optimization; salinity; Shatt al-Arab River; tidal influence; water management","en","journal article","","","","","","","","2019-03-31","","","Water Resources","","",""
"uuid:f10d191c-6258-4d63-ab77-2ff9fe86c516","http://resolver.tudelft.nl/uuid:f10d191c-6258-4d63-ab77-2ff9fe86c516","High-Permittivity Pad Design for Dielectric Shimming in Magnetic Resonance Imaging Using Projection-Based Model Reduction and a Nonlinear Optimization Scheme","van Gemert, J.H.F. (TU Delft Microwave Sensing, Signals & Systems); Brink, W.M. (Leiden University Medical Center); Webb, A. (Leiden University Medical Center); Remis, R.F. (TU Delft Signal Processing Systems)","","2018","Inhomogeneities in the transmit radio frequency magnetic field ( {\text{B}}-{1}^{+} ) reduce the quality of magnetic resonance (MR) images. This quality can be improved by using high-permittivity pads that tailor the {\text{B}}-{1}^{+} fields. The design of an optimal pad is application-specific and not straightforward and would therefore benefit from a systematic optimization approach. In this paper, we propose such a method to efficiently design dielectric pads. To this end, a projection-based model order reduction technique is used that significantly decreases the dimension of the design problem. Subsequently, the resulting reduced-order model is incorporated in an optimization method in which a desired field in a region of interest can be set. The method is validated by designing a pad for imaging the cerebellum at 7 T. The optimal pad that is found is used in an MR measurement to demonstrate its effectiveness in improving the image quality.","dielectric shimming; fields; high-permittivity pads; Magnetic resonance imaging; optimization; reduced order modeling","en","journal article","","","","","","Accepted author manuscript","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:1b747787-0319-4120-be10-0640f344ec5e","http://resolver.tudelft.nl/uuid:1b747787-0319-4120-be10-0640f344ec5e","A Graph Theoretic Approach to Optimal Firefighting in Oil Terminals","Khakzad, N. (TU Delft Safety and Security Science)","","2018","Effective firefighting of major fires in fuel storage plants can effectively prevent or delay fire spread (domino effect) and eventually extinguish the fire. If the number of firefighting crew and equipment is sufficient, firefighting will include the suppression of all the burning units and cooling of all the exposed units. However, when available resources are not adequate, fire brigades would need to optimally allocate their resources by answering the question “which burning units to suppress first and which exposed units to cool first?” until more resources become available from nearby industrial plants or residential communities. The present study is an attempt to answer the foregoing question by developing a graph theoretic methodology. It has been demonstrated that suppression and cooling of units with the highest out-closeness index will result in an optimum firefighting strategy. A comparison between the outcomes of the graph theoretic approach and an approach based on influence diagram has shown the efficiency of the graph approach.","oil storage plants; domino effect; firefighting; optimization; graph theory; influence diagram","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:367f977d-ba14-44be-9c4e-93eba5af508f","http://resolver.tudelft.nl/uuid:367f977d-ba14-44be-9c4e-93eba5af508f","Kinetic modeling and optimization of parameters for biomass pyrolysis: A comparison of different lignocellulosic biomass","Mahmood, Hamayoun (University of Engineering & Technology Lahore); Ramzan, Naveed (University of Engineering & Technology Lahore); Shakeel, A. (TU Delft Rivers, Ports, Waterways and Dredging Engineering; University of Engineering & Technology Lahore); Moniruzzaman, Muhammad (Universiti Teknologi Petronas); Iqbal, Tanveer (University of Engineering & Technology Lahore); Kazmi, Mohsin Ali (University of Engineering & Technology Lahore); Sulaiman, Muhammad (University of Engineering & Technology Lahore)","","2018","A primitive element for the development of sustainable pyrolysis processes is the study of thermal degradation kinetics of lignocellulosic waste materials for optimal energy conversion. The study presented here was conducted to predict and compare the optimal kinetic parameters for pyrolysis of various lignocellulosic biomass such as wood sawdust, bagasse, rice husk, etc., under both isothermal and non-isothermal conditions. The pyrolysis was simulated over the temperature range of 500–2400 K for isothermal process and for heating rate range of 25–165 K/s under non-isothermal conditions to assess the maximum pyrolysis rate of virgin biomass in both cases. Results revealed that by increasing the temperature, the pyrolysis rate was enhanced. However, after a certain higher temperature, the pyrolysis rate was diminished which could be due to the destruction of the active sites of char. Conversely, a decrease in the optimum pyrolysis rate was noted with increasing reaction order of the virgin biomass. 
Although each lignocellulosic material attained its maximum pyrolysis rate at the optimum conditions of 1071 K and 31 K/s for isothermal and non-isothermal conditions, respectively, under these conditions only wood sawdust exhibited complete thermal utilization and achieved final concentrations of 0.000154 and 0.001238 under non-isothermal and isothermal conditions, respectively.","kinetic modeling; lignocellulosic residue; optimization; Pyrolysis","en","journal article","","","","","","","","","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:581ece88-c8c9-4455-81d4-856de2be2caa","http://resolver.tudelft.nl/uuid:581ece88-c8c9-4455-81d4-856de2be2caa","A string-based representation and crossover operator for evolutionary design of dynamical mechanisms","Kuppens, P.R. (TU Delft Mechatronic Systems Design); Wolfslag, W.J. (TU Delft Learning & Autonomous Control)","","2018","Robots would perform better when their mechanical structure is specifically designed for their designated task, for instance by adding spring mechanisms. However, designing such mechanisms, which match the dynamics of the robot with the task, is hard and time consuming. To assist designers, a platform that automatically designs dynamical mechanisms is needed. This letter introduces a novel string-based representation for mechanisms, including evolutionary operators, that allows an evolutionary algorithm to automatically design dynamical mechanisms for a designated task. The mechanism representation allows simultaneous optimization of topology and parameters. Simulation experiments investigate various algorithms to obtain best optimization performance. We show the efficacy of the representation, operators, and evolutionary algorithm by designing mechanisms that track straight lines and ellipses by virtue of both their kinematic and dynamic properties.","dynamics; Mechanism design; optimal control; optimization","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2018-07-31","","","Mechatronic Systems Design","","",""
"uuid:1397c49e-4df9-4ff2-84d7-6a8511757062","http://resolver.tudelft.nl/uuid:1397c49e-4df9-4ff2-84d7-6a8511757062","Numerical thermal analysis and optimization of multi-chip LED module using response surface methodology and genetic algorithm","Tang, H. (TU Delft Electronic Components, Technology and Materials); Ye, Huai-Yu (Chongqing University); Chen, Xian-Ping (Chongqing University); Qian, Cheng (Chinese Academy of Sciences; Changzhou Institute of Technology Research for Solid State Lighting); Fan, Xue-Jun (Lamar University); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials)","","2017","In this paper, the heat transfer performance of the multi-chip (MC) LED module is investigated numerically by using a general analytical solution. The configuration of the module is optimized with genetic algorithm (GA) combined with a response surface methodology. The space between chips, the thickness of the metal core printed circuit board (MCPCB), and the thickness of the base plate are considered as three optimal parameters, while the total thermal resistance (Rtot) is considered as a single objective function. After optimizing objectives with GA, the optimal design parameters of three types of MC LED modules are determined. The results show that the thickness of MCPCB has a stronger influence on the total thermal resistance than other parameters. In addition, the sensitivity analysis is performed based on the optimum data. It reveals thatRtot increases with the increased thickness of MCPCB, and reduces as the space between chips increases. The effect of the thickness of base plate is far less than that of the thickness of MCPCB. After optimization, three types of MC LED modules obtain lower Tj andRtot. Moreover, the optimized modules can emit large luminous energy under high-power input conditions. 
Therefore, the optimization results are of great significance in the selection of configuration parameters to improve the performance of the MC LED module.","genetic algorithm; Multi-chip LED module; optimization; response surface methodology; thermal resistance; OA-Fund TU Delft","en","journal article","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:e6fc3865-531f-4ea9-aeff-e2ef923ae36f","http://resolver.tudelft.nl/uuid:e6fc3865-531f-4ea9-aeff-e2ef923ae36f","Modeling, design and optimization of flapping wings for efficient hovering flighth","Wang, Q. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Goosen, J.F.L. (copromotor); Delft University of Technology (degree granting institution)","2017","Inspired by insect flights, flapping wing micro air vehicles (FWMAVs) keep attracting attention from the scientific community. One of the design objectives is to reproduce the high power efficiency of insect flight. However, there is no clear answer yet to the question of how to design flapping wings and their kinematics for power-efficient hovering flight. In this thesis, we aim to answer this research question from the perspectives of wing modeling, design and optimization.
Quasi-steady aerodynamic models play an important role in evaluating aerodynamic performance and designing and optimizing flapping wings. In Chapter 2, we present a predictive quasi-steady model by including four aerodynamic loading terms. The loads result from the wing's translation, rotation, their coupling as well as the added-mass effect. The necessity of including all four of these terms in a quasi-steady model to predict both the aerodynamic force and torque is demonstrated. Validations indicate a good accuracy of predicting the center of pressure, the aerodynamic loads and the passive pitching motion for various Reynolds numbers. Moreover, compared to the existing quasi-steady models, the proposed model does not rely on any empirical parameters and, thus, is more predictive, which enables application to the shape and kinematics optimization of flapping wings.
For flapping wings with passive pitching motion, a shift in the pitching axis location alters the aerodynamic loads, which in turn change the passive pitching motion and the flight efficiency. Therefore, in Chapter 3, we investigate the optimal pitching axis location for flapping wings to maximize the power efficiency during hovering flight. Optimization results show that the optimal pitching axis is located between the leading edge and the mid-chord line, which shows a close resemblance to insect wings. An optimal pitching axis can save up to 33% of power during hovering flight when compared to optimized traditional wings used by most of the flapping wing micro air vehicles (FWMAVs). Traditional wings typically use the straight leading edge as the pitching axis. In addition, the optimized pitching axis enables the drive system to recycle more energy during the deceleration phases as compared to their counterparts. This observation underlines the particular importance of the wing pitching axis location for energy-efficient FWMAVs when using kinetic energy recovery drive systems.
The presence of wing twist can alter the aerodynamic performance and power efficiency of flapping wings by changing the angle of attack. In order to study the optimal twist of flapping wings for hovering flight, we propose a computationally efficient fluid-structure interaction (FSI) model in Chapter 4. The model uses an analytical twist model and the quasi-steady aerodynamic model introduced in Chapter 2 for the structural and aerodynamic analysis, respectively. Based on the FSI model, we optimize the twist of a rectangular wing by minimizing the power consumption during hovering flight. The power efficiency of the optimized twistable wings is compared with corresponding optimized rigid wings. It is shown that the optimized twistable wings cannot dramatically outperform the optimized rigid wings in terms of power efficiency, unless the pitching amplitude at the wing root is limited. When this amplitude decreases, the optimized twistable wings can always maintain high power efficiency by introducing a certain twist, while the optimized rigid wings need more power for hovering.
Considering the high impact of the root stiffness on flapping kinematics and power consumption, we present an active hinge design which uses electrostatic force to change the hinge stiffness in Chapter 5. The hinge is realized by stacking three conducting spring steel layers which are separated by dielectric Mylar films. The theoretical model shows that the stacked layers can switch from slipping with respect to each other to sticking together when the resultant electrostatic force between layers, which can be controlled by the applied voltage, is above a threshold value. The switch from slipping to sticking will result in a dramatic increase of the hinge stiffness (about 9x). Therefore, a short duration of the sticking can still lead to a considerable change in the passive pitching motion. Experimental results successfully show the decrease of the pitching amplitude with the increase of the applied voltage. Flight control based on the electrostatic force can be very power-efficient since there is ideally no power consumption due to the control operations.
In Chapter 6, we retrospect and discuss the most important aspects related to the modeling, design and optimization of flapping wings for efficient hovering flight. In Chapter 7, the overall conclusions are drawn and recommendations for further study are provided.","flapping wing; passive pitching; pitching axis; aerodynamic model; power efficiency; optimization","en","doctoral thesis","","978-94-92516-57-2","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:c49b634b-6f5a-4828-9d32-403d1df42abf","http://resolver.tudelft.nl/uuid:c49b634b-6f5a-4828-9d32-403d1df42abf","Warping NMPC for online generation and tracking of optimal trajectories","Lago, Jesus (TU Delft Team Bart De Schutter; VITO-Energyville); Erhard, Michael (SkySails Power; University of Freiburg); Diehl, Moritz (Albert-Ludwigs-Universität Freiburg)","","2017","Generation of feasible and optimal reference trajectories is crucial in tracking Nonlinear Model Predictive Control. Especially, for stability and optimality in presence of a time varying parameter, adaptation of the tracking trajectory has to be implemented. General approaches are real-time generation of trajectories or switching between a discrete set of precomputed trajectories. In order to circumvent the operational efforts of these methods for a special type of dynamical systems, we propose time warping as an alternative approach. This algorithm implements online generation of tracking trajectories by warping a single precomputed reference. In detail, warpable systems, feasibility and optimality of trajectories and the controller implementation are discussed. Finally, as an application example, simulation results of a tethered kite system for airborne wind energy generation are presented.","optimal trajectory; optimization; Predictive control; renewable energy systems","en","journal article","","","","","","","","","","","Team Bart De Schutter","","",""
"uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","http://resolver.tudelft.nl/uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","Computationally efficient analysis & design of optimally compact gear pairs and assessment of gear compliance","Amani, A. (TU Delft Emerging Materials)","Spitas, C. (promotor); Spitas, Vasilios (promotor); Delft University of Technology (degree granting institution)","2016","","gear design; spur gear; design parameters; pitch compatibility; interference; corner contact; pointed tip; undercutting; non-standard; non-dimensional; design guidelines; highest point of single tooth contact (HPSTC); finite element analysis; stress analysis; bending strength; compact gears; optimization; centre distance; deviation; tolerance zone; computational modelling; compact gear drive; compliance; bending compliance; foundational compliance; Hertzian compliance; non-dimensional modelling; Saint-Venant's Principle; cubic Hermitian interpolation","en","doctoral thesis","","978-94-6186-739-1","","","","","","2018-11-15","","","Emerging Materials","","",""
"uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","http://resolver.tudelft.nl/uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems","Fitiwi, D.Z. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Rivier Abbad, M. (promotor); Delft University of Technology (degree granting institution)","2016","","transmission expansion planning; uncertainty and variability; optimization; stochastic programming; moments technique; clustering","en","doctoral thesis","","978-84-608-9955-6","","","","","","","","","Energie and Industrie","","",""
"uuid:0010fdac-32ec-459b-bb9b-3e6327a85496","http://resolver.tudelft.nl/uuid:0010fdac-32ec-459b-bb9b-3e6327a85496","Gradient-based optimization of flow through porous media: Version 3","Jansen, J.D. (TU Delft Geoscience and Engineering)","","2016","These notes form part of the course material for the MSc course AES1490 ""Advanced Reservoir Simulation"" which has been taught at TU Delft over the past decade as part of the track ""Petroleum Engineering and Geosciences"" in the two-year MSc program ""Applied Earth Sciences"".
The notes cover the gradient-based optimization of subsurface flow. In particular they treat optimization methods in which the gradient information is obtained with the aid of the adjoint method, which is, in essence, an efficient numerical implementation of implicit differentiation in a multivariate setting.
Chapter 1 reviews the basic concepts of multivariate optimization and demonstrates the equivalence of the Lagrange multiplier method for constrained optimization and the use of implicit differentiation to obtain gradients in the presence of constraints.
Chapter 2 introduces the use of Lagrange multipliers and implicit differentiation for the optimization of large-scale numerical systems with the adjoint method. In particular it addresses the optimization of oil recovery from subsurface reservoirs represented as reservoir simulation models, i.e. space- and time-discretized numerical representations of the nonlinear partial differential equations that govern multi-phase flow through porous media. It also covers the use of robust adjoint-based optimization to cope with the inherent uncertainty in subsurface flow models and addresses some numerical implementation aspects.
Chapter 3 gives a brief overview of various further topics related to gradient-based optimization of subsurface flow, such as closed-loop reservoir management and hierarchical optimization of short-term and long-term reservoir performance.
97%) with any given configuration (capacity, data width and frequency). Besides these better-than-worst-case current measures, we also propose a generic post-manufacturing power and performance characterization methodology for DRAMs that can help identify realistic current estimates and an optimized set of timing measures for a given DRAM device, thereby further improving the accuracy of the power and energy estimates for that particular device. To optimize DRAM power consumption, we propose a set of performance-neutral DRAM power-down strategies coupled with a power management policy that, for any given use-case (access granularity, page policy and memory type), achieves significant power savings without impacting its worst-case performance (bandwidth and latency) guarantees. We verify the pessimism in DRAM currents and four critical DRAM timing parameters as provided in the datasheets by experimentally evaluating 48 DDR3 devices of the same configuration. We further derive an optimal set of timings, using the performance characterization algorithm, at which the DRAM can operate successfully under worst-case run-time conditions without increasing its energy consumption. We observed up to 33.3% and 25.9% reduction in DRAM read and write latencies, and 17.7% and 15.4% improvement in energy efficiency. We validate the DRAMPower model against a circuit-level DRAM power model and verify it against real power measurements from hardware for different DRAM operations. We observed between 1-8% difference in power estimates, with an average of 97% accuracy. 
We also evaluated the power-management policy and power-down strategies and observed significant energy savings (close to the theoretical optimum) at a very marginal average-case performance penalty, without impacting any of the original latency and bandwidth guarantees.","DRAM; power; energy; estimation; optimization; modeling; variation","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","",""
"uuid:3beba71b-7e19-4277-bdd7-752c43f867af","http://resolver.tudelft.nl/uuid:3beba71b-7e19-4277-bdd7-752c43f867af","Cost optimal river dike design using probabilistic methods","Bischiniotis, K.; Kanning, W.; Jonkman, S.N.","","2014","This research focuses on the optimization of river dikes using probabilistic methods. Its aim is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and the requirements that are set by the user. Even though there are many ways that may provoke the dike failure, the literature study showed that the failure mechanisms that contribute most to the failure of the typical Dutch river dikes are overflowing, piping and inner slope stability. Based on these, the most important design variables of the dike cross-section dimensions are set and following probabilistic design methods, the probability of failure of many different dike cross-sections is estimated taking into account the abovementioned failure mechanisms. Different cross-section configurations may all comply with a set target probability of failure. Of these, the cross-section that results in the lowest cost is considered the optimal. This approach is applied to several representative dikes, each of which gives a different optimal design, depending on the local boundary conditions. The method shows that the use of probabilistic optimization gives more cost-efficient designs than the traditional partial safety factor designs.","river dike; optimization; probabilistic design; cross-section; failure probability","en","conference paper","Brazilian Water Resources Association and Acquacon Consultoria.","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:9dff055c-eb6d-4005-a052-fce8aaeea792","http://resolver.tudelft.nl/uuid:9dff055c-eb6d-4005-a052-fce8aaeea792","Numerical Methods for the Optimization of Nonlinear Residual-Based Sungrid-Scale Models Using the Variational Germano Identity","Maher, G.D.; Hulshoff, S.J.","","2014","The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain values for the parameters of subgrid-scale models that are nonlinear in their coefficients. Specifically, the Newton-Raphson method is employed. A least-squares minimization formulation of the Germano Identity is developed to resolve issues that occur when the residual is positive and negative over different regions of the domain. In this case a Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is used to solve the minimization problem. The developed method is applied to the one-dimensional unsteady forced Burgers’ equation and the two-dimensional steady Stokes’ equations. It is shown that the Newton-Raphson method and BFGS algorithm generally solve, or minimize the residual of, the Germano relations in a relatively small number of iterations. The optimized subgridscale models are shown to outperform standard SGS models with respect to a L2 error. Additionally, the nonlinear SGS models tend to achieve lower L2 errors than the linear models.","subgrid-scale model; variational multiscale method; variational Germano identity; optimization; turbulence","en","conference paper","CIMNE","","","","","","","","Aerospace Engineering","Aerodynamics, Wind Energy & Propulsion","","","",""
"uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7","http://resolver.tudelft.nl/uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7","Validation and Optimization of a Design Formula for Stable Geometrically Open Filter Structures","Van de Sande, S.A.H.; Uijttewaal, W.S.J.; Verheij, H.J.","","2014","Granular filters are used for protection against scour and erosion of base material. For a proper functioning it is necessary that at the interfaces between the filter structure, the subsoil and the water flowing above the filter structure no material will be transported. Different types of granular filters can be distinguished, this paper focuses on stable geometrically open filter structures under current attack. Hoffmans (2012) developed a design formula for stable geometrically open filters. This paper presents the validation and an optimization of the design formula based on performed model tests. It is shown that the current design formula is too conservative. The proposed improvements allows for a wider range of applicability.","filter; granular filter; geometrically open filter; open filter; interface stability; bed protection; design formula; stability; optimization; ICCE 2014","en","conference paper","Coastal Engineering Research Council","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:cb6544e8-02f9-403c-8540-698b7af9a185","http://resolver.tudelft.nl/uuid:cb6544e8-02f9-403c-8540-698b7af9a185","Rolling horizon predictions of bus trajectories","Oshyani, M.F.; Cats, O.","","2014","Bus travel times are subject to inherent and recurrent uncertainties. A real-time prediction scheme regarding how the transit system evolves will potentially facilitate more adaptive operations as well as more adaptive passengers’ decisions. This scheme should be tractable, sufficiently fast and reliable to be used in real time applications. For this purpose, a heuristic hybrid scheme for departure time estimation is proposed in this study. The predic-tion generated by the proposed hybrid scheme consists of three travel time components: schedule, instantaneous and historical data sources. Genetic algorithm is applied in order to specify the contribution of each data source component to the prediction scheme. The pro-posed scheme was applied for a trunk bus line in Stockholm, Sweden. In addition, the current-ly deployed scheme was replicated in order to compare the performance of both schemes. The results suggest that the proposed scheme reduces the overall mean absolute error by almost 20%. Moreover the proposed scheme provides better predictions except for very long term predictions where both schemes yield the same performance.","prediction; bus departure time; optimization; travel time and genetic algorithm","en","conference paper","National Technical University of Athens (NTUA)","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","",""
"uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60","http://resolver.tudelft.nl/uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60","Automatic Hardware Generation for Reconfigurable Architectures","Nane, R.","Bertels, K.L.M. (promotor)","2014","Reconfigurable Architectures (RA) have been gaining popularity rapidly in the last decade for two reasons. First, processor clock frequencies reached threshold values past which power dissipation becomes a very difficult problem to solve. As a consequence, alternatives were sought to keep improving the system performance. Second, because Field-Programmable Gate Arrays (FPGAs) technology substantially improved (e.g., increase in transistors per mm2), system designers were able to use them for an increasing number of (complex) applications. However, the adoption of reconfigurable devices brought with itself a number of related problems, of which the complexity of programming can be considered an important one. One approach to program an FPGA is to implement an automatically generated Hardware Description Language (HDL) code from a High-Level Language (HLL) specification. This is called High-Level Synthesis (HLS). The availability of powerful HLS tools is critical to managing the ever-increasing complexity of emerging RA systems to leverage their tremendous performance potential. However, current hardware compilers are not able to generate designs that are comparable in terms of performance with manually written designs. Therefore, to reduce this performance gap, research on how to generate hardware modules efficiently is imperative. In this dissertation, we address the tool design, integration, and optimization of the DWARV 3.0 HLS compiler. Dissimilar to previous HLS compilers, DWARV 3.0 is based on the CoSy compiler framework. As a result, this allowed us to build a highly modular and extendible compiler in which standard or custom optimizations can be easily integrated. 
The compiler is designed to accept a large subset of C code as input and to generate synthesizable VHDL code for unrestricted application domains. To enable third-party tool-chain integration of DWARV 3.0, we propose several extensions to IP-XACT (an XML-based standard used for tool interoperability) such that hardware-dependent software can be generated and integrated automatically. Furthermore, we propose two new algorithms: one to optimize performance under different input area constraints, and one to leverage the benefits of both jump and predication schemes from conventional processors, adapted for hardware execution. Finally, we performed an evaluation against state-of-the-art HLS tools. The results show that, in terms of application execution time, DWARV 3.0 performs best, on average, among the academic compilers.","high-level synthesis; hardware; reconfigurable; architecture; compiler; survey; dwarv; HLS; optimization","en","doctoral thesis","CPI Koninklijke Wohrmann","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Engineering","","","",""
"uuid:d063dfb9-6ec6-4c43-b315-fb98a576498a","http://resolver.tudelft.nl/uuid:d063dfb9-6ec6-4c43-b315-fb98a576498a","Model-based Feedforward Control for Inkjet Printheads","Khalate, A.A.","Babuska, R. (promotor); Bombois, X. (promotor)","2013","In recent years, inkjet technology has emerged as a promising manufacturing tool. This technology has gained its popularity mainly due to the facts that it can handle diverse materials and it is a non-contact and additive process. Moreover, the inkjet technology offers low operational costs, easy scalability, digital control and low material waste. Thus, apart from conventional document printing, the inkjet technology has been successfully applied as a micro-manufacturing tool in the areas of electronics, mechanical engineering, and life sciences. In this thesis, we investigate a piezo-based drop-on-demand (DoD) printhead which is commonly used for industrial and commercial applications due to its ability to handle diverse materials. A typical drop-on-demand (DoD) inkjet printhead consists of several ink channels in parallel. Each ink channel is provided with a piezo-actuator which on the application of an actuation voltage pulse, generates pressure oscillations inside the ink channel. These pressure oscillations push the ink drop out of the nozzle. The print quality delivered by an inkjet printhead depends on the properties of the jetted drop, i.e., the drop velocity, the drop volume and the jetting direction. To meet the challenging performance requirements posed by new applications, these drop properties have to be tightly controlled. The performance of the inkjet printhead is limited by two factors. The first one is the residual pressure oscillations. The actuation pulses are designed to provide an ink drop of a specified volume and velocity under the assumption that the ink channel is in a steady state. Once the ink drop is jetted the pressure oscillations inside the ink channel take several micro-seconds to decay. 
If the next ink drop is jetted before these residual pressure oscillations have decayed, the resulting drop properties will be different from those of the previous drop. The second limiting factor is cross-talk: the drop properties through an ink channel are affected when the neighboring channels are actuated simultaneously. Generally, drop consistency is improved by manual tuning of the piezo actuation pulse, based on some physical insight or on exhaustive experimental studies on the printhead. However, these ad-hoc procedures have proved to be insufficient in dealing with the above limitations. In this thesis, a model-based control approach is proposed to improve the performance of a DoD inkjet printhead. It offers a systematic and efficient means to improve the attainable performance of a DoD inkjet printhead by reducing the effect of the residual oscillations and the cross-talk. Furthermore, the models that have been developed for this purpose can also give new insights into the operation of the printhead. In order to achieve this goal, it is required to have a fairly accurate and simple model of an inkjet printhead. It is not easy to obtain a good physical model for an inkjet printhead due to insufficient knowledge of the complex interactions in the printhead. Therefore, in this thesis, we have used system identification, i.e., we use experimental measurements in order to develop a model. For this purpose, it is required that the piezo-actuator is also used as a sensor. Note that the crucial aspect in the model development is to obtain a model of the inkjet system close to its operating conditions. Therefore, we have collected measurements of the piezo sensor signal during the jetting of a series of drops at a given DoD frequency. For the printhead under investigation, we found that the dynamics of the ink channel depend on the DoD frequency. This phenomenon is caused by non-linearities in the droplet formation. 
Consequently, we have modeled the ink channel dynamics for every DoD frequency. In this thesis, it is shown that the set of local inkjet models obtained at different DoD frequencies can be encompassed by a polytopic uncertainty on the parameters of a nominal model. Using the same identification procedure, the cross-talk can also be modeled. In order to improve the printhead performance, the actuation pulse was redesigned. The new drive pulse is designed to provide good performance for all models in the uncertainty region by means of robust feedforward control. The pulse also respects the pulse shape constraints posed by the driving electronics (ASICs). Besides the robust actuation pulse, our approach also introduces an optimal delay between the actuation of neighboring channels to reduce the cross-talk. The current driving electronics limits the possibilities of reshaping the actuation pulse. Since it is expected that this limitation will be relaxed in the future, we have also developed a procedure to design a robust pulse without pulse shape constraints. The performance improvement achieved with this unconstrained pulse has proved to be quite limited. The proposed method is also useful for inkjet practitioners who do not have any insight into the inkjet dynamics. The efficacy of our approach is demonstrated by our experimental results. The proposed method was verified in practice by jetting a series of ink drops at various DoD frequencies and also by jetting a bitmap image. For the printhead under consideration, the drop consistency is improved almost fourfold with the proposed approach when compared to conventional methods.","inkjet printhead; identification; feedforward control; robust control; optimization","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","http://resolver.tudelft.nl/uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","Averaging Level Control to Reduce Off-Spec Material in a Continuous Pharmaceutical Pilot Plant","Lakerveld, R.; Benyahia, B.; Heider, P.L.; Zhang, H.; Braatz, R.D.; Barton, P.I.","","2013","The judicious use of buffering capacity is important in the development of future continuous pharmaceutical manufacturing processes. The potential benefits are investigated of using optimal-averaging level control for tanks that have buffering capacity for a section of a continuous pharmaceutical pilot plant involving two crystallizers, a combined filtration and washing stage and a buffer tank. A closed-loop dynamic model is utilized to represent the experimental operation, with the relevant model parameters and initial conditions estimated from experimental data that contained a significant disturbance and a change in setpoint of a concentration control loop. The performance of conventional proportional-integral (PI) level controllers is compared with optimal-averaging level controllers. The aim is to reduce the production of off-spec material in a tubular reactor by minimizing the variations in the outlet flow rate of its upstream buffer tank. The results show a distinct difference in behavior, with the optimal-averaging level controllers strongly outperforming the PI controllers. In general, the results stress the importance of dynamic process modeling for the design of future continuous pharmaceutical processes.","control; process modeling; process simulation; parameter estimation; dynamic modeling; optimization; crystallization; continuous pharmaceutical manufacturing","en","journal article","MDPI","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:f30bd41b-4b44-4459-ab68-d913fffdb8e9","http://resolver.tudelft.nl/uuid:f30bd41b-4b44-4459-ab68-d913fffdb8e9","Estimation of primaries by sparse inversion incuding the ghost","Verschuur, D.J.","","2013","Today, the problem of surface-related multiples, especially in shallow water, is not fully solved. Although surface-related multiple elimination (SRME) method has proved to be successful on a large number of data cases, the involved adaptive subtraction acts as a weak link in this methodology, where primaries can be distorted due to their interference with multiples. Therefore, recently, SRME has been redefined as a large-scale inversion process, called estimation of primaries by sparse inversion (EPSI). In this process the multi-dimensional primary impulse responses are considered as the unknowns in a largescale inversion process. By parameterizing these impulse responses as spikes in the space-time domain, and using a sparsity constraint in the update step, the algorithm looks for those primaries that, together with their associated multiples, explain the total input data. As the objective function in this minimization process truly goes to zero, the tendency for distorting primaries is greatly reduced. An additional advantage is that imperfections in the data can be included in the forward model and resolved simultaneously, such as the missing near offsets. In this paper it is demonstrated that the ghost effect can also be included in the EPSI formulation after which a ghost-free primary estimate can be obtained, even in the case the ghost notch is within the desired spectrum.","acquisition; inversion; multiples; optimization; wave equation","en","journal article","Society of Exploration Geophysicists","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:5ede00e1-9101-49ea-9a2f-81b99291b110","http://resolver.tudelft.nl/uuid:5ede00e1-9101-49ea-9a2f-81b99291b110","Risk approach to land reclamation: Feasibility of a polder terminal","Lendering, K.T.; Jonkman, S.N.; Peters, D.J.","","2013","New ports are mostly constructed on low lying coastal areas or shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to assure flood safety. The resulting ‘convention-al terminal’ requires large volumes of fill material often dredged from the sea, which is costly. The terminal yard of a ‘polder terminal’ lies below the outside water level and is surrounded by a quay wall flood defense structure. This saves large amounts of reclamation cost but introduces higher damage potential during flood-ing and thus an increased flood risk. A risk-based framework is made to determine the optimal quay wall and polder level, which is an optimization (cost benefit analysis) under two variables. Overtopping failure proves to be the dominant failure mechanism for flooding. The reclamation savings prove to be larger than the in-creased flood risk demonstrating that the polder terminal could be an attractive alternative to the conventional terminal.","container terminals; flood risks; optimization; polder terminals; probabilistic design","en","conference paper","CRC Press/Balkema - Taylor & Francis Group","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:6bf9ad22-c4a5-4f5f-8006-fce525935f04","http://resolver.tudelft.nl/uuid:6bf9ad22-c4a5-4f5f-8006-fce525935f04","Cloud-Based Design Analysis and Optimization Framework","Mueller, V.; Strobbe, T.","","2013","Integration of analysis into early design phases in support of improved building performance has become increasingly important. It is considered a required response to demands on contemporary building design to meet environmental concerns. The goal is to assist designers in their decision making throughout the design of a building but with growing focus on the earlier phases in design during which design changes consume less effort than similar changes would in later design phases or during construction and occupation.Multi-disciplinary optimization has the potential of providing design teams with information about the potential trade-offs between various goals, some of which may be in conflict with each other. A commonly used class of optimization algorithms is the class of genetic algorithms which mimic the evolutionary process. For effective parallelization of the cascading processes occurring in the application of genetic algorithms in multi-disciplinary optimization we propose a cloud implementation and describe its architecture designed to handle the cascading tasks as efficiently as possible.","cloud computing; design analysis; optimization; generative design; building performance","en","conference paper","","","","","","","","","","","","","",""
"uuid:241873a0-ad14-43f8-a135-e2c133622c2f","http://resolver.tudelft.nl/uuid:241873a0-ad14-43f8-a135-e2c133622c2f","Biological Computation for Digital Design and Fabrication: A biologically-informed finite element approach to structural performance and material optimization of robotically deposited fibre structures","Oxman, N.; Laucks, J.; Kayser, M.; Uribe, C.D.G.; Duro-Royo, J.","","2013","The formation of non-woven fibre structures generated by the Bombyx mori silkworm is explored as a computational approach for shape and material optimization. Biological case studies are presented and a design approach for the use of silkworms as entities that can compute fibrous material organization is given in the context of an architectural design installation. We demonstrate that in the absence of vertical axes the silkworm can spin flat silk patches of variable shape and density. We present experiments suggesting sufficient correlation between topographical surface features, spinning geometry and fibre density. The research represents a scalable approach for optimization-driven fibre-based structural design and suggests a biology-driven strategy for material computation.","biologically computed digital fabrication; robotic fabrication; finite element analysis; optimization; CNC weaving","en","conference paper","","","","","","","","","","","","","",""
"uuid:7d81abad-fcbe-4094-871a-54755ee0f03e","http://resolver.tudelft.nl/uuid:7d81abad-fcbe-4094-871a-54755ee0f03e","Packing Optimization for Digital Fabrication","Dritsas, S.; Kalvo, R.; Sevtsuk, A.","","2013","We present a design-computation method of design-to-production automation and optimization in digital fabrication; an algorithmic process minimizing material use, reducing fabrication time and improving production costs of complex architectural form. Our system compacts structural elements of variable dimensions within fixed-size sheets of stock material, revisiting a classical challenge known as the two-dimensional bin-packing problem. We demonstrate improvements in performance using our heuristic metric, an approach with potential for a wider range of architectural and engineering design-built digital fabrication applications, and discuss the challenges of constructing free-form design efficiently using operational research methodologies.","design computation; digital fabrication; automation; optimization","en","conference paper","","","","","","","","","","","","","",""
"uuid:76b9b6db-926c-479e-9031-ed4abf2324df","http://resolver.tudelft.nl/uuid:76b9b6db-926c-479e-9031-ed4abf2324df","A Computational Method for Integrating Parametric Origami Design and Acoustic Engineering","Takenaka, T.; Okabe, A.","","2013","This paper proposes a computational form-finding method for integrating parametric origami design and acoustic engineering to find the best geometric form of a concert hall. The paper describes an application of this method to a concert hall design project in Japan. The method consists of three interactive subprograms: a parametric origami program, an acoustic simulation program, and an optimization program. The advantages of the proposed method are as follows. First, it is easy to visualize engineering results obtained from the acoustic simulation program. Second, it can deal with acoustic parameters as one of the primary design materials as well as origami parameters and design intentions. Third, it provides a final optimized geometric form satisfying both architectural design and acoustic conditions. The method is valuable for generating new possibilities of architectural form by shifting from a traditional form-making process to a form-finding process.","interactive design method; parametric origami; acoustic simulation; optimization; quadrat count method","en","conference paper","","","","","","","","","","","","","",""
"uuid:38379080-da96-4acd-a86d-f3b8f492dd1b","http://resolver.tudelft.nl/uuid:38379080-da96-4acd-a86d-f3b8f492dd1b","Algorithmic Engineering in Public Space","Hulin, J.; Pavlicek, J.","","2013","The paper reflects on a relationship between an algorithmic and a standard (intuitive) approach to design of public space. A realized project of a plaza renovation in Czech town Vsetin is described as a study case. The paper offers an overview of benefits and drawbacks of the algorithmic approach in the described study case and it outlines more general conclusions.","algorithm; public space; circle packing; optimization; pavement","en","conference paper","","","","","","","","","","","","","",""
"uuid:3bfab3e0-d826-44c5-81da-f06c33ee0299","http://resolver.tudelft.nl/uuid:3bfab3e0-d826-44c5-81da-f06c33ee0299","A Case Study in Teaching Construction of Building Design Spaces","Nicknam, M.; Bernal, M.; Haymaker, J.","","2013","Until recently, design teams were constrained by tools and schedule to only be able to generate a few alternatives, and analyze these from just a few perspectives. The rapid emergence of performance-based design, analysis, and optimization tools gives design teams the ability to construct and analyze far larger design spaces more quickly. This creates new opportunities and challenges in the ways we teach and design. Students and professionals now need to learn to formulate and execute design spaces in efficient and effective ways. This paper describes curriculum that was taught in a course 8803 Multidisciplinary Analysis and Optimization taught by the authors at Schools of Architecture and Building Construction at Georgia Tech in spring 2013. We approach design as a multidisciplinary design space formulation and search process that seeks maximum value. To explore design spaces, student designers need to execute several iterative processes of problem formulation, generate alternative, analyze them, visualize trade space, and address decision-making. The paper first describes students design space exploration experiences, and concludes with our observations of the current challenges and opportunities.","design space exploration; teaching; multidisciplinary; optimization; analysis","en","conference paper","","","","","","","","","","","","","",""
"uuid:25459ba0-fe3a-444c-847a-34ad5c41ab9f","http://resolver.tudelft.nl/uuid:25459ba0-fe3a-444c-847a-34ad5c41ab9f","Integrating Computational and Building Performance Simulation Techniques for Optimized Facade Designs","Gadelhak, M.","","2013","This paper investigates the integration of Building Performance Simulation (BPS) and optimization tools to provide high performance solutions. An office room in Cairo, Egypt was chosen as a base testing case, where a Genetic Algorithm (GA) was used for optimizing the annual daylighting performance of two parametrically modeled daylighting systems. In the first case, a combination of a redirecting system (light shelf) and shading system (solar screen) was studied. While in the second, a free-form gills surface was also optimized to provide acceptable daylighting performance. Results highlight the promising future of using computational techniques along with simulation tools, and provide a methodology for integrating optimization and performance simulation techniques at early design stages.","High performance facade; daylighting simulation; optimization; form finding; genetic algorithm","en","conference paper","","","","","","","","","","","","","",""
"uuid:1d9c4022-dbd6-4452-9842-4649c1fdd432","http://resolver.tudelft.nl/uuid:1d9c4022-dbd6-4452-9842-4649c1fdd432","A Freight Transport Model for Integrated Network, Service, and Policy Design","Zhang, M.","Tavasszy, L.A. (promotor)","2013","“The goal of the European Transport Policy is to establish a sustainable transport system that meets society’s economic, social and environmental needs
” (CEC, 2009). This statement indicates the challenges that European transport policy makers are faced with when facilitating an increasing freight transport demand with limited transport infrastructure. The development of an interconnected intermodal transport system has been recognized by the European Commission as an important, strategic task that will contribute to solving the dilemma between the accommodation of an increased freight flow and the need for a sustainable living environment. This thesis focuses on model-based, quantitative analysis for infrastructure network design decisions for large-scale intermodal transport systems. The involvement of public concerns, as represented by the governmental objectives on sustainability, brings additional complexity into infrastructure network design. Governments are often concerned with network design on a regional or national scale. The enlargement of the network scale to an international level further increases the level of heterogeneity of the network, among other factors in terms of the number of actors involved, the diversity of transport demand and the variety of transport service supply. These new objectives and dimensions pose new challenges to freight transport infrastructure network design. This thesis proposes a new model to support policy making for an intermodal freight transport network. The model is able to simultaneously incorporate large-scale, multimodal, multi-commodity and multi-actor perspectives. It can be used for integrated policy, infrastructure and service design. Results can be visualized per transport mode and per commodity value group on a geographic information system at the segment, terminal, corridor, regional, national, and network level. Implementation of the model for a realistic-scale network design is another contribution of this thesis. 
To this end, we calibrated the model by using two approaches: a Genetic Algorithm based method and a feedback-based method. The model was validated by comparing the modelled link flows with observations, testing the cross elasticities of the costs to demand and comparing the catchment area of the terminals with areas observed in practice. The calibration results indicate that the model adequately captures the network usage decisions on an aggregated level. The model was applied to Dutch container transport network design problems. Databases of Dutch container transport demand, features of the European multimodal freight transport infrastructure network, information about selected inland waterway transport services, and information about transport and transhipment costs, emissions and external costs were embedded in the model. After completing the theoretical and empirical specification, the model was applied to policy decisions on Dutch container transport. The thesis extensively discusses the integrated infrastructure, service, and policy design that may contribute to managing the costs of the freight flows while ensuring a sustainable living environment. The main findings from the application are as follows. - A higher CO2 price can result in lower total transport costs, despite extra handling costs in intermodal transhipments. The costs saved by bundling freight and using intermodal transport can compensate for the additional handling costs. As these savings cannot compensate for the internalized CO2 emission costs, the total operational costs borne by transport operators will increase. - Network efficiency can be increased by closing terminals that are not able to attract sufficient volumes of demand. However, this is unlikely to happen in practice, because the private terminal operators and the local governments have local interests to protect in those small terminals, which may conflict with the objective of minimizing total network costs. 
- The hub-network services assumed and tested in this study cannot compete with road transport or shuttle barge transport services in the base scenario, due to the extra transhipment costs, low load factor, and low demand for IWW container transport. In a future scenario, these services are only feasible under very high traffic growth. - There is no single optimal future infrastructure network. Instead, a good infrastructure network design mainly depends on the future demand, transport price, and development of new transport technology. Based on the conclusions drawn in this thesis, implementing the combination of CO2 pricing and terminal network configuration is more effective than solely implementing CO2 pricing, with regard to total network CO2 emissions. A range of efficient networks, forming a frontier of minimal total network costs and total network CO2 emissions, is presented in the thesis, instead of one single optimal solution. The frontier provides more options in terminal network optimization in terms of the target network performance. Which network is optimal will depend on the relative value placed on CO2 emissions. The thesis ends with a vision on future freight transport network design models. A potential research direction is to incorporate the dimension of time into the model. This extension will enable the model to capture dynamic demand; to be applicable for scheduling synchronized intermodal transport services; to provide more realistic estimations of transport emissions; and to analyse network reliability, including network robustness and service robustness. 
Reference: CEC (2009) 'Communication from the Commission: A sustainable future for transport: Towards an integrated, technology-led and user-friendly system', Commission of the European Communities, Brussels.","freight; transport; network design; optimization; GIS; service network; transport policy","en","doctoral thesis","TRAIL Research School","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","",""
"uuid:0feb1f50-32ae-4e54-87ea-3b551497389e","http://resolver.tudelft.nl/uuid:0feb1f50-32ae-4e54-87ea-3b551497389e","Risk based design of land reclamation and the feasibility of the polder terminal","Lendering, K.; Jonkman, S.N.; Peters, D.J.","","2013","New ports are mostly constructed on low lying coastal areas or in shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to assure flood safety. The resulting ‘conventional terminal’ requires large volumes of good quality fill material often dredged from the sea, which is costly. The alternative concept of a ‘polder terminal’ has a terminal yard which lies below the outside water level and is surrounded by a quay wall flood defence structure. This saves large amounts of reclamation investment but introduces a higher damage potential in case of flooding and corresponding flood risk. Important conditions for the feasibility of a polder terminal are low pervious subsoil and high reclamation cost. Further, a polder terminal requires a water storage and drainage system, against additional cost. A risk-based analysis of the optimal quay wall height and polder level is performed, which is an optimization (cost benefit analysis) under two variables. The overtopping failure mechanism proves to be the dominant failure mechanism for flooding. During overtopping the water depth in the polder terminal is larger than on the conventional terminal, resulting in higher damage potential and corresponding flood risk for the polder terminal. 
However, the reclamation savings prove to be larger than the increased flood risk: the ‘polder terminal’ could save 10 to 30% of the total cost (investment and risk), demonstrating it to be an economically attractive alternative to a conventional terminal.","container terminals; flood risks; optimization; polder terminals; probabilistic design","en","conference paper","Institute for Research and Community Service","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:56a64800-0dde-42fd-a2f1-05ed7c357b0b","http://resolver.tudelft.nl/uuid:56a64800-0dde-42fd-a2f1-05ed7c357b0b","An Optimization Model for Simultaneous Periodic Timetable Generation and Stability Analysis","Sparing, D.; Goverde, R.M.P.; Hansen, I.A.","","2013","We present an optimization model which is able to generate feasible periodic timetables for networks given the line structure and the requested line frequencies, taking into account infrastructure constraints and train overtake locations. As the model uses the minimum cycle time as the objective function, the stability of the timetable is also simultaneously expressed. Dimension reduction techniques are presented taking advantage of the symmetries of periodic timetables. The model is applied to a case study of a dense corridor with heterogeneous traffic.","timetable design; timetable stability; optimization","en","conference paper","International Association of Railway Operations Research (IAROR)","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","",""
"uuid:3e2cb6d7-3ba2-4b45-af71-2fa106b5d189","http://resolver.tudelft.nl/uuid:3e2cb6d7-3ba2-4b45-af71-2fa106b5d189","Optimal Usage of Multiple Energy Carriers in Residential Systems: Unit Scheduling and Power Control","Ramirez-Elizondo, L.M.","Van der Sluis, L. (promotor)","2013","The world’s increasing energy demand and growing environmental concerns have motivated scientists to develop new technologies and methods to make better use of the remaining resources of our planet. The main objective of this dissertation is to develop a scheduling and control tool at the district level for small-scale systems with multiple energy carriers and to apply exergy-related concepts for the optimization of these systems. The tool is based on the energy hub approach and provides insights and techniques that can be used to evaluate new district energy scenarios. The topics that are presented include the multicarrier unit commitment framework, the multi-carrier exergy hub approach, a hierarchical multi-carrier control architecture, a comparison of multi-carrier power applications and the implementation of a multi-carrier energy management system in a real infrastructure.","optimization; multiple energy-carriers; renewables; sustainable energy","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Electrical Sustainable Energy","","","",""
"uuid:fcc290f8-cf60-44a4-be68-189f29a2fb82","http://resolver.tudelft.nl/uuid:fcc290f8-cf60-44a4-be68-189f29a2fb82","Estimates of extremes in the best of all possible worlds","Van Nooyen, R.R.P.; Kolechkina, A.G.","","2012","In applied hydrology the question of the probability of exceeding a certain value occurs regularly. Often it is in a context where extrapolation from a relatively short time series is needed. It is well known that in its simplest form extreme value theory applies to independent identically distributed random variables. It is also well known that more advanced theory allows for some degrees of correlation and that techniques for coping with trends are available. However, the problem of extrapolation remains. To isolate the effect of extrapolation we generate synthetic time series of length 20, 50 and 100 from known distributions to derive empirical distributions for the 1:100 and 1:1000 exceedance.","extremes; estimators; optimization; statistical distributions","en","conference paper","STAHY","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:93af1749-0b97-416a-ba27-907ae4921a7f","http://resolver.tudelft.nl/uuid:93af1749-0b97-416a-ba27-907ae4921a7f","Using particle packing technology for sustainable concrete mixture design","Fennis, S.A.A.M.; Walraven, J.C.","","2012","The annual production of Portland cement, estimated at 3.4 billion tons in 2011, is responsible for about 7% of the total worldwide CO2-emission. To reduce this environmental impact it is important to use innovative technologies for the design of concrete structures and mixtures. In this paper, it is shown how particle packing technology can be used to reduce the amount of cement in concrete by concrete mixture optimization, resulting in more sustainable concrete. First, three different methods to determine the particle distribution of a mixture are presented: optimization curves, particle packing models and discrete element modelling. The advantage of using analytical particle packing models is presented based on relations between packing density, water demand and strength. Experiments on ecological concrete demonstrate how effectively particle packing technology can be used to reduce the cement content in concrete. Three concrete mixtures with low cement content were developed and the compressive strength, tensile strength, modulus of elasticity, shrinkage, creep and electrical resistance were determined. By using particle packing technology in concrete mixture optimization, it is possible to design concrete in which the cement content is reduced by more than 50% and the CO2-emission of concrete is reduced by 25%.","aggregate; cement spacing; concrete; flowability; particle packing; optimization","en","journal article","Heron","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","",""
"uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","http://resolver.tudelft.nl/uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","Sequential robust optimization of a V-bending process using numerical simulations","Wiebenga, J.H.; Van den Boogaard, A.H.; Klaseboer, G.","","2012","The coupling of finite element simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a generally applicable strategy for modeling and efficiently solving robust optimization problems based on time-consuming simulations. Noise variables and their effect on the responses are taken into account explicitly. The robust optimization strategy consists of four main stages: modeling, sensitivity analysis, robust optimization and sequential robust optimization. Use is made of a metamodel-based optimization approach to couple the computationally expensive finite element simulations with the robust optimization procedure. The initial metamodel approximation will only serve to find a first estimate of the robust optimum. Sequential optimization steps are subsequently applied to efficiently increase the accuracy of the response prediction at regions of interest containing the optimal robust design. The applicability of the proposed robust optimization strategy is demonstrated by the sequential robust optimization of an analytical test function and an industrial V-bending process. For the industrial application, several production trial runs have been performed to investigate and validate the robustness of the production process. For both applications, it is shown that the robust optimization strategy accounts for the effect of different sources of uncertainty on the process responses in a very efficient manner.
Moreover, application of the methodology to the industrial V-bending process results in valuable process insights and an improved robust process design.","metal forming processes; finite element method; optimization; uncertainty; robustness; sequential optimization","en","journal article","Springer-Verlag","","","","","","","","Mechanical, Maritime and Materials Engineering","Materials Innovation Institute","","","",""
"uuid:aa419ba5-3d31-4d73-adf3-c79870deccc7","http://resolver.tudelft.nl/uuid:aa419ba5-3d31-4d73-adf3-c79870deccc7","Optimal Adaptive Policymaking under Deep Uncertainty? Yes we can!","Hamarat, C.; Kwakkel, J.H.; Pruyt, E.","","2012","Uncertainty manifests itself in almost every aspect of decision making. Adaptive and flexible policy design becomes crucial under uncertainty. An adaptive policy is designed to be flexible and can be adapted over time to changing circumstances and unforeseeable surprises. A crucial part of an adaptive policy is the monitoring system and associated pre-specified actions to be taken in response to how the future unfolds. However, the adaptive policymaking literature remains silent on how to design this monitoring system and how to specify appropriate values that will trigger the pre-specified responses. These trigger values have to be chosen such that the resulting adaptive plan is robust and flexible to surprises in the future. Actions should be neither triggered too early nor too late. One possible family of techniques for specifying triggers is optimization. Trigger values would then be the values that maximize the extent of goal achievement across a large ensemble of scenarios. This ensemble of scenarios is generated using Exploratory Modeling and Analysis. In this paper, we show how optimization can be useful for the specification of trigger values. A Genetic Algorithm is used because of its flexibility and efficiency in complex and irregular solution spaces. The proposed approach is illustrated for the transitions of the energy system towards a more sustainable functioning which requires effective dynamic adaptive policy design. The main aim of this paper is to show the contribution of optimization for adaptive policy design.","adaptive policymaking; exploratory modeling and analysis; optimization","en","conference paper","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:a53f5bbd-2640-41cb-982d-b05a6fff9166","http://resolver.tudelft.nl/uuid:a53f5bbd-2640-41cb-982d-b05a6fff9166","Manifold mapping optimization with or without true gradients","Delinchant, B.; Lahaye, D.; Wurtz, F.; Coulomb, J.L.","","2012","This paper deals with Space Mapping optimization algorithms in general and with the Manifold Mapping technique in particular. The idea of such algorithms is to optimize a model with a minimum number of objective function evaluations by using a less accurate but faster model. In this optimization procedure, fine and coarse models interact at each iteration in order to adjust themselves and converge to the real optimum. The Manifold Mapping technique mathematically guarantees this convergence but requires gradients of both the fine and coarse models. Approximated gradients can be used in some cases but are subject to divergence. True gradients can be obtained for many numerical models using adjoint techniques, symbolic or automatic differentiation. In this context, we have tested several Manifold Mapping variants and compared their convergence in the case of a real magnetic device optimization.","space mapping; manifold mapping; optimization; surrogate model; gradients; symbolic derivation; automatic differentiation","en","report","Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:9a018e13-f29e-4597-8870-6f8ab2fa9787","http://resolver.tudelft.nl/uuid:9a018e13-f29e-4597-8870-6f8ab2fa9787","Multi-Objective Optimization for Urban Drainage Rehabilitation","Barreto Cordero, W.J.","Price, R.K. (promotor); Solomatine, D.P. (promotor)","2012","Flooding in urbanized areas has become a very important issue around the world. The level of service (or performance) of urban drainage systems (UDS) degrades over time for a number of reasons. In order to maintain an acceptable performance of UDS, early rehabilitation plans must be developed and implemented. In developing countries the situation is serious: little investment is made and there are smaller funds each year for rehabilitation. The allocation of such funds must be “optimal” in providing value for money. However, this task is not easy to achieve due to the multicriteria nature of the rehabilitation process, which must take technical, environmental and social interests into account. Most of the time these are conflicting, which makes it a highly demanding task. The present book introduces a framework for multicriteria decision making for the rehabilitation of urban drainage systems, and focuses on several aspects such as improving the performance of the multicriteria optimization through the inclusion of new features in the algorithms and the proper selection of performance criteria. The use of Genetic Algorithms, parallelization and applications in countries such as Brazil, Colombia and Venezuela are treated in this book.","multi-objective; urban drainage; optimization; parallel computing; genetic algorithms","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:b4aee571-0489-42ff-ab55-d74e980f724a","http://resolver.tudelft.nl/uuid:b4aee571-0489-42ff-ab55-d74e980f724a","Shape Parameterization in Aircraft Design: A Novel Method, Based on B-Splines","Straathof, M.H.","Van Tooren, M.J.L. (promotor)","2012","This thesis introduces a new parameterization technique based on the Class-Shape-Transformation (CST) method. The new technique consists of an extension to the CST method in the form of a refinement function based on B-splines. This Class-Shape-Refinement-Transformation (CSRT) method has the same advantages as the original CST method, while also allowing for local deformations in a shape. A number of test cases were performed using two different design frameworks with low and high fidelity. The low fidelity framework was based on a commercial panel method code and coupled to various optimization algorithms. The high fidelity framework used an in-house Euler code and employed adjoint optimization.","shape; parameterization; aircraft; design; B-splines; Class-Shape-Refinement-Transformation; adjoint; euler; optimization","en","doctoral thesis","","","","","","","","2012-02-03","Aerospace Engineering","FPP","","","",""
"uuid:65db30d9-206c-4661-abd2-c645482a8e2d","http://resolver.tudelft.nl/uuid:65db30d9-206c-4661-abd2-c645482a8e2d","Binaural Model-Based Speech Intelligibility Enhancement and Assessment in Hearing Aids","Schlesinger, A.","Gisolf, D. (promotor); Boone, M.M. (promotor)","2012","The enhancement of speech intelligibility in noise is still the main subject of hearing aid research. Building on the advanced results obtained with the hearing glasses, in the present research speech intelligibility is further improved by the application of binaural post-filters. The functionalities of these filters are related to the principles of auditory scene analysis. A statistical analysis of binaural cues in noise at the output of different hearing aids, the utilization of a Bayesian classifier in the source separation process and an evolutionary optimization against binaural models of speech intelligibility provide a comprehensive understanding of the utilization of binaural post-filters in adverse environments. As listening ease and a fair amount of speech quality are mandatory in speech enhancement, tradeoffs between speech intelligibility and quality were studied in terms of the preservation of natural binaural cues and the suppression of musical noise.","CASA; STI; SII; binaural; genetic algorithm; optimization; Bayesian classification","en","doctoral thesis","TU Delft","","","","","","","2011-12-23","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:dfaae28f-c2dd-4bdc-82d6-a1c1aa98fa26","http://resolver.tudelft.nl/uuid:dfaae28f-c2dd-4bdc-82d6-a1c1aa98fa26","Predicting Storm Surges: Chaos, Computational Intelligence, Data Assimilation, Ensembles","Siek, M.B.L.A.","Solomatine, D.P. (promotor)","2011","Accurate predictions of storm surge are of importance in many coastal areas. This book focuses on data-driven modelling using methods of nonlinear dynamics and chaos theory for predicting storm surges. A number of new enhancements are presented: phase space dimensionality reduction, incomplete time series, phase error correction, finding true neighbours, optimization of chaotic models, data assimilation and multi-model ensembles. These were tested on case studies in the North Sea and Caribbean Sea. Chaotic models appear to be accurate and reliable short- and mid-term predictors of storm surges, aimed at supporting decision-makers in flood prediction and ship navigation.","ocean wave prediction; nonlinear dynamics and chaos theory; neural networks; optimization; dimensionality reduction; phase error correction; incomplete time series; multi-model ensemble prediction; data-driven modelling; computational intelligence; hydroinformatics","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","http://resolver.tudelft.nl/uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","Event-based progression detection strategies using scanning laser polarimetry images of the human retina","Vermeer, K.A.; Lo, B.; Zhou, Q.; Vos, F.M.; Vossepoel, A.M.; Lemij, H.G.","","2011","Monitoring glaucoma patients and ensuring optimal treatment requires accurate and precise detection of progression. Many glaucomatous progression detection strategies may be formulated for Scanning Laser Polarimetry (SLP) data of the local nerve fiber thickness. In this paper, several strategies, all based on repeated GDx VCC SLP measurements, are tested to identify the optimal one for clinical use. The parameters of the methods were adapted to yield a set specificity of 97.5% on real image series. For a fixed sensitivity of 90%, the minimally detectable loss was subsequently determined for both localized and diffuse loss. Due to the large size of the required data set, a previously described simulation method was used for assessing the minimally detectable loss. The optimal strategy was identified and was based on two baseline visits and two follow-up visits, requiring two-out-of-four positive tests. Its associated minimally detectable loss was 5–12 μm, depending on the reproducibility of the measurements.","progression detection; simulation; glaucoma; polarimetry; optimization; image processing","en","journal article","Elsevier","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:be0f5746-ff05-42a3-805a-f4a72fef4cc6","http://resolver.tudelft.nl/uuid:be0f5746-ff05-42a3-805a-f4a72fef4cc6","Applying the shuffled frog-leaping algorithm to improve scheduling of construction projects with activity splitting allowed","Tavakolan, M.T.; Ashuri, B.; Chiara, N.","","2011","In situations where contractors compete to finish a given project with the least duration and cost, the ability to improve the project's quality properties seems essential for project managers. Evolutionary Algorithms (EAs) have been applied as suitable algorithms for multi-objective Time-Cost trade-off Optimization (TCO) and Time-Cost-Resource Optimization (TCRO) in the past few decades; however, as an improvement on EAs, the Shuffled Frog Leaping Algorithm (SFLA) has been introduced as an algorithm capable of achieving a better solution with faster convergence. Furthermore, allowing splitting in the execution of activities can bring models closer to approximating real projects. One example is used to demonstrate the impact of SFLA and splitting on the results of the model and to compare them with previous algorithms. The current research shows that SFLA improves final results and that splitting allows the model to find suitable solutions.","optimization; multi-objective SFLA; splitting; leveling; construction management","en","conference paper","","","","","","","","","","","","","",""
"uuid:8d7290d3-a903-4cfe-8c12-0387b94a192e","http://resolver.tudelft.nl/uuid:8d7290d3-a903-4cfe-8c12-0387b94a192e","Information Theory for Risk-based Water System Operation","Weijs, S.V.","Van de Giesen, N.C. (promotor)","2011","Operational management of water resources needs predictions of future behavior of water systems, to anticipate shortage or excess of water in a timely manner. Because the natural systems that are part of the hydrological cycle are complex, the predictions are inevitably subject to considerable uncertainty. Still, definitive decisions about e.g. hydropower reservoir releases or polder pump flows have to be made looking ahead into the uncertain future. This demands a risk-based approach, in which, ideally, all possible future events should be considered, along with the probabilities that represent the information and uncertainty available at the time of decision. The thesis deals with water, but the flows studied are mostly those of information. Like the flow of water, information flows obey certain fundamental laws. These are the laws of Information Theory, which also provide guidelines for developing models, handling data, and designing statistical procedures to make predictions and decisions. The information-theoretical perspective used in the thesis leads to the conclusion that predictions should necessarily be probabilistic and should be evaluated using a relative entropy measure, of which an intuitive decomposition into three components is presented. Other chapters in the thesis deal with the use of model predictive control and stochastic dynamic programming for operational water management, the time-dynamics of information, generation of weighted ensemble forecasts that balance uncertainty and information, and a perspective on data compression as philosophy of science.
Recommendations for practice and further research indicate that entropy has a bright future, not only as an ever-increasing thermodynamic measure, but also as an information-theoretical measure of uncertainty that is useful in any field where predictions and decisions have to be made in a context of complex and largely unobservable systems.","information theory; operational water management; risk; probabilistic forecasts; optimization; entropy; control; water; hydrology; water resources management","en","doctoral thesis","VSSD","","","","","","","2011-03-29","Civil Engineering and Geosciences","Watermanagement","","","",""
"uuid:58f4d3c3-0a38-4640-aded-51d7bca2396e","http://resolver.tudelft.nl/uuid:58f4d3c3-0a38-4640-aded-51d7bca2396e","Analysis of near-optimal evacuation instructions","Huibregtse, O.L.; Bliemer, M.C.J.; Hoogendoorn, S.P.","","2010","In this paper, approximations of optimal evacuation instructions are analyzed. The instructions, consisting of a departure time, a destination, and a route, are for the evacuation by car of the population of a region threatened by a hazard. An optimization method presented in earlier research is applied to three different hazard scenarios, resulting in an instruction set for each scenario. These instruction sets differ because of the network degeneration caused by the different hazard scenarios. Analysis of the network occupancy during the evacuations, as a consequence of the instruction sets, shows that at least 87%, 90%, and 87% of the capacity is used in the respective scenarios for the period in which the effect of the network degeneration is relatively small. Although the results are logical, no clear patterns are perceptible in the instructions leading to this network occupancy. This endorses the viewpoint of the earlier paper, namely, that it is useful to apply an optimization method to create evacuation instructions instead of applying instructions set up by straightforward rules (like evacuating to the nearest destination). Furthermore, it shows the efficiency of this specific optimization method.","evacuation; instructions; optimization","en","journal article","Elsevier","","","","","","","","Civil Engineering and Geosciences","Transport and Planning","","","",""
"uuid:ccc6e7f3-3b21-4f05-a0ca-df8cad6d0ca0","http://resolver.tudelft.nl/uuid:ccc6e7f3-3b21-4f05-a0ca-df8cad6d0ca0","Optimization of sandwich composites fuselages under flight loads","Yan, C.; Bergsma, O.; Koussios, S.; Zu, L.; Beukers, A.","","2010","Sandwich composite fuselages appear to be a promising choice for future aircraft because of their structural efficiency and functional integration advantages. However, the design of sandwich composites is more complex than that of other structures because of the many variables involved. In this paper, the fuselage is designed as a sandwich composite cylinder, and its structural optimization using the finite element method (FEM) is outlined to obtain the minimum weight. The constraints include structural stability and the composite failure criteria. In order to get a verification baseline for the FEM analysis, the stability of sandwich structures is studied and the optimal design is performed based on the analytical formulae. Then, the predicted buckling loads and the optimization results obtained from a FEM model are compared with those from the analytical formulae, and a good agreement is achieved. A detailed parametric optimal design for the sandwich composite cylinder is conducted. The optimization method used here consists of two steps: the minimization of the layer thickness followed by tailoring of the fiber orientation. The factors comprise layer number, fiber orientation, core thickness, frame dimension and spacing. Results show that the two-step optimization is an effective method for sandwich composites, and the foam sandwich cylinder with a core thickness of 5 mm and a frame pitch of 0.5 m exhibits the minimum weight.","sandwich; composites; stability; optimization; ANOVA","en","journal article","Springer","","","","","","","","Aerospace Engineering","Aerospace Materials and Manufacturing","","","",""
"uuid:c2a93de0-21e4-490b-a18c-09f319c2da17","http://resolver.tudelft.nl/uuid:c2a93de0-21e4-490b-a18c-09f319c2da17","Rigorous simulations of emitting and non-emitting nano-optical structures","Janssen, O.T.A.","Urbach, H.P. (promotor)","2010","In the next decade, several applications of nanotechnology will change our lives. LED lighting is about to replace the common light bulb. Its main advantages are its energy efficiency and long lifetime. LEDs could be much more efficient if the part of the emitted light that is currently trapped in the device could be radiated out of it. Other devices such as photovoltaic solar cells and biosensors can also be made more efficient and cheaper. LEDs, solar cells and biosensors have in common that they consist of small structures on the order of the wavelength of light. With such small structures light can be manipulated in a special way. In this thesis, we describe a method to calculate the interaction of light with these small structures. It is shown that an efficient LED, which radiates light, can be treated as a solar cell that absorbs as much of the incoming light as possible. On this so-called reciprocity principle, which was discovered by Hendrik Antoon Lorentz, a very efficient computational optimization method can be based. With this method, existing designs of, for example, LEDs can be iteratively made more efficient. This thesis shows optimized designs of LEDs, solar cells and biosensors.","FDTD; LED; plasmonics; optimization; reciprocity; biosensors","en","doctoral thesis","Optics Research Group","","","","","","","2010-11-09","Applied Sciences","Imaging Science & Technology","","","",""
"uuid:f34c2606-dbae-4182-873b-8c1a99714297","http://resolver.tudelft.nl/uuid:f34c2606-dbae-4182-873b-8c1a99714297","Interval Analysis: Contributions to static and dynamic optimization","De Weerdt, E.","Mulder, J.A. (promotor)","2010","The field of global optimization has been an active one for many years. By far the most applied methods are gradient and evolutionary based algorithms. The most appearing drawback of those types of methods is that one cannot guarantee that the global solution is found within finite time. Moreover, if the global solution is found (by chance), the methods cannot provide a guaranteed feedback to the user stating that the provided solution is the global one. Therefore, no natural stopping conditions are available for most of the existing optimization algorithms. There are, however, other tools available, which do provide the guarantee that the global solution is found and that have natural stopping conditions. Interval analysis in combination with interval arithmetic is such a tool. Interval arithmetic was initially developed to cope with rounding errors in digital computers. Using interval arithmetic, one can perform reliable computing such that catastrophic numeric errors can be prevented (the explosion of the Ariane 5 rocket on June 4, 1996 was caused by a simple numeric overflow). It was soon found, that interval arithmetic could be used to form guaranteed bounds on any type of function or numeric algorithm for any domain. These bounds provide the crucial information needed to perform global optimization. Interval analysis is the group name of all methods that use the information obtained from guaranteed bounds to solve global optimization problems. Developed in the 1960’s, interval analysis gained popularity during the 90’s when digital computers became increasingly powerful. Nowadays, interval analysis has been widely applied in the field of static optimization, i.e. 
optimization that does not involve differential algebraic equations, and verified integration. However, interval analysis has not been applied often in the field of dynamic optimization. The goal of the research is to investigate whether interval analysis, in combination with interval arithmetic, can be used to solve non-linear, constrained, dynamic optimization problems. Moreover, the possibility of extending existing theory in the field of static optimization is investigated. The focus of the research lies on trajectory optimization (a specific case of dynamic optimization). The most important condition on the designed solvers is that the dynamic constraints, formed by the equations of motion, must be satisfied at all time instances. To reach the research objectives, the theory and application of both interval arithmetic and interval analysis have been thoroughly investigated. The work is divided into two parts. The first part is on static optimization, which includes the discussion on interval arithmetic and describes the basics of interval analysis. The existing theory of inclusion functions, formed via interval arithmetic, has been evaluated and extended. The development of the Polynomial Inclusion Function, a new type of inclusion function, shows that significant improvements are possible in this field. During the review of interval analysis, its main virtues and limitations were demonstrated. The most important advantages are the guarantee that all optimal solutions are found to any degree of accuracy and that the user knows when the solution set has been found. The main limitation is the curse of dimensionality: the computational load grows, for most problems, exponentially with a linear increase in problem dimension. The author believes that this curse is mainly caused by two aspects of the current implementation of interval analysis. The first aspect is the widening of the inclusion function due to the dependency effects.
The dependency effects can be partially prevented by efficient implementation of function evaluations and through application of advanced inclusion functions. However, a generic efficient method for preventing dependency effects is still not available. The other aspect causing the curse of dimensionality is the current inefficient handling of available information. The optimization algorithms within interval analysis are commonly based on branch and bound algorithms. Through a process of elimination, one is left with a list of domains in which the optimal solution set must lie. Current methods for eliminating (part of) the domain, such as the Newton step, do not use the gathered/available information efficiently. This is mainly due to the definition of the domain and the storage of the information, i.e. keeping track of infeasible regions. It is the author’s opinion that this is the reason that the application of interval analysis is limited to solving lower dimensional problems. Despite the curse of dimensionality, interval analysis based solvers can solve complicated, non-linear, constrained problems. This has been shown in multiple chapters in the first part. Complicated problems, such as neural network output optimization and the problem of integer ambiguity resolution in the field of Global Navigation Satellite Systems, are solved rigorously by interval analysis based solvers. The applications show that equality and inequality constraints are efficiently handled using interval analysis. Moreover, they show that interval analysis can be used to solve real-life problems and demonstrate that interval analysis is a strong global optimization tool. The second part of the research is on dynamic optimization, thereby focusing on trajectory optimization. The trajectory optimization problem is infinite dimensional with begin- and end-point constraints, dynamic constraints (the equations of motion), and possibly additional equality and inequality constraints. 
The problem is infinite dimensional since the states and controls need to be specified for each time instance. In the field of trajectory optimization one can identify two classes of methods: indirect methods and direct methods. Disregarding the optimization problems for which an analytic solution is present, both classes require a transformation to make the problem solvable. Three transformation methods have been considered: control parameterization, state parameterization, and control and state parameterization. With control parameterization, the control is defined for each time step using a polynomial and the states are computed using explicit integration. For state parameterization, the states are defined and the controls are deduced via the equations of motion (implicit integration). The last method applies parameterization of both the states and controls with respect to time. Trajectories are sought that satisfy the dynamic constraints at given time instances. The nature of the transformation methods implies that the first two methods can be used to find trajectories that satisfy the dynamic constraints at all time instances, while the latter cannot be used for this purpose. Therefore, only the first two methods have been thoroughly investigated. The last method was only briefly reviewed. The main conclusion regarding the control parameterization approach is that it suffers greatly from the required explicit integration. Although verified integration is possible and sharp bounds on the trajectories can be provided, the problem is to prove the existence of a solution within a given domain of the search space. Without the ability to update the estimate of the minimal cost function value early in the optimization process, the computational load becomes very high. 
Despite the drawback of control parameterization, it has been demonstrated that this approach can be used to find the global solution, although, currently, only very low dimensional problems can be solved. Higher dimensional problems can be solved using the state parameterization approach. By using simplex splines, the begin- and end-point constraints can be implicitly satisfied, which significantly reduces the problem complexity. The limitation is that the approach is only suitable for fully controllable systems. For systems that are not fully controllable one needs to apply explicit integration for all dependent states. This will increase the computational load significantly and would eliminate most of the benefits of the state parameterization approach. An interval analysis based solver has been applied to solve the problem of satellite trajectory planning for formation flying. Although still suffering from the curse of dimensionality, the results demonstrate that interval analysis can be used to solve the problem rigorously. Moreover, it has been shown that the performance of the solver is superior to gradient based solvers when constraints are imposed. The main conclusion of the research is that it is possible to apply interval analysis to dynamic optimization. The current status of the solvers (in this thesis and in literature) allows one to solve only ‘lower’ dimensional problems. Radical changes in the approach of handling information and keeping track of infeasible regions must be made to make interval analysis applicable to higher dimensional problems. Despite the limitations of interval analysis, the presented results clearly demonstrate the virtues of interval analysis based solvers in the field of global optimization. 
Several new exciting research opportunities have been identified, such as nonlinear stability analysis using interval analysis, the combination of interval analysis and evolutionary algorithms, and a new way of forming inclusion functions to boost the efficiency of interval analysis based solvers. Overall, the potential of interval analysis is very large and the author believes that interval analysis will become one of the most important tools in the field of global optimization in the near future.","interval analysis; optimization; dynamic","en","doctoral thesis","","","","","","","","2010-09-14","Aerospace Engineering","Control and Simulation Division","","","",""
"uuid:fdc2dbda-b419-450f-a305-64825a43a0c8","http://resolver.tudelft.nl/uuid:fdc2dbda-b419-450f-a305-64825a43a0c8","Global Optimization using Interval Analysis: Interval Optimization for Aerospace Applications","Van Kampen, E.","Mulder, J.A. (promotor)","2010","Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in adaptive control algorithms; and in system identification problems, such as online aircraft model identification or human perception modeling. The main goal of this thesis is to investigate how Interval Analysis (IA) can be used as a tool for aerospace related optimization problems; to examine its theoretical and practical limitations, and to explore the ways in which optimization algorithms can benefit from interval analysis. A subset of goals is to improve the solutions for a number of aerospace related optimization problems. The scientific contribution of this thesis consists of the design and implementation of interval optimization algorithms for four important aerospace problems. The first contribution concerns finding the trim points for a nonlinear aircraft model. Trim points, defined as the combination of control settings for which all linear and rotational accelerations on the aircraft are zero, are important for flight control system design, since they provide information about the flight envelope and stability properties of the aircraft. Unlike other trim algorithms, the interval based method can guarantee that all trim points are found. In the second application, an interval optimization algorithm is developed for fitting pilot input/output data from an experiment in the SIMONA Research Simulator to a multi-modal human perception model. 
Perception models improve the understanding of how humans perceive motion and are an essential tool in the design of flight simulators. Results show that the minimum of the cost function found by the interval method is lower than the one previously found, resulting in an improved human perception model. This second application particularly demonstrates the capabilities of IA optimization as a parameter identification tool. The third contribution is an interval based algorithm for solving the integer ambiguity problem related to Global Navigation Satellite Systems (GNSS). Phase measurements of the carrier wave of a GNSS signal are used to estimate the length and orientation of baselines between two or more antennas. This estimation procedure contains an optimization problem in which the integer number of carrier wavelengths between antennas has to be determined. The new interval method provides guarantees that correct solutions are found when the measurement noise is encapsulated by an interval number. The final contribution is an interval optimization algorithm that minimizes fuel consumption during rendezvous and docking procedures of satellites in circular orbits. To avoid integration of interval functions, an analytical solution to the system of differential equations that describes the relative motion of the satellites is used to generate trajectories resulting from a set of thruster pulses of varying amplitudes. Introduction of obstacles, in the form of forbidden areas in the path between the two satellites, makes the problem nonlinear, such that gradient-based optimization algorithms can fail to obtain the globally optimal solution. The interval algorithm always converges to the trajectory that avoids all obstacles and results in minimum fuel consumption. 
It can be concluded that IA is an excellent tool for solving nonlinear optimization problems, providing guarantees on obtaining the global minimum of the cost function.","optimization; interval analysis","en","doctoral thesis","","","","","","","","2010-09-24","Aerospace Engineering","Control and Simulation","","","",""
"uuid:f272117c-e1b5-4ae6-96cb-aa86fe62a015","http://resolver.tudelft.nl/uuid:f272117c-e1b5-4ae6-96cb-aa86fe62a015","Overview of Methods for Multi-Level and/or Multi-Disciplinary Optimization","De Wit, A.J.; Van Keulen, A.","","2010","Multi-level optimization and multi-disciplinary optimization are areas of research that are concerned with developing efficient analysis and optimization techniques for complex systems that are made up of coupled elements (components). Within the field of multilevel optimization and multi-disciplinary optimization a large number of techniques have been developed for efficient analysis and optimization of complex systems. This paper presents an unified overview of main stream approaches that were found in the literature. Four general steps are distinguished in both multi-level optimization and multi-disciplinary optimization: physical coupling, optimization problem coupling, coordination and solution sequence. Via these four steps approaches are classified and possibilities for combining aspects of different methods are given. Finally, advantages and disadvantages of approaches applied to engineering problems are discussed and directions for further research are given.","multi-level; multi-disciplinary; optimization; decomposition; coordination; overview","en","conference paper","American Institute of Aeronautics and Astronautics (AIAA)","","","","","","","","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","",""
"uuid:319dffb8-3bbc-49de-a6c5-68d8972f3888","http://resolver.tudelft.nl/uuid:319dffb8-3bbc-49de-a6c5-68d8972f3888","A generic method to optimize instructions for the control of evacuations","Huibregtse, O.L.; Hoogendoorn, S.P.; Pel, A.J.; Bliemer, M.C.J.","","2010","A method is described to develop a set of optimal instructions to evacuate by car the population of a region threatened by a hazard. By giving these instructions to the evacuees, traffic conditions and therefore the evacuation efficiency can be optimized. The instructions, containing a departure time, a destination, and a route, are created using an optimization method based on ant colony optimization. Iteratively is searched for an approximation of the optimal evacuation instructions. The usefulness of the optimization method compared to other optimization methods is the simultaneous optimization of the departure time, destination, and route instructions instead of the optimization of only one or two of these variables for a dynamic instead of static evacuation problem. In a case study, the functioning of the method is illustrated. The relative high fitness in the case study of the set of instructions following from the optimization method compared with the fitness of a set of instructions set up by straightforward rules (like evacuating to the nearest destination) shows also the usefulness of applying an optimization method to create a set of evacuation instructions.","evacuation; instructions; control; optimization; ant colony optimization","en","conference paper","IFAC","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:1137ebe3-3dcb-43ca-84f7-89bbbbc2d635","http://resolver.tudelft.nl/uuid:1137ebe3-3dcb-43ca-84f7-89bbbbc2d635","Efficient particle-based estimation of marginal costs in a first-order macroscopic traffic flow model","Zuurbier, F.S.; Hegyi, A.; Hoogendoorn, S.P.","","2010","Marginal costs in traffic networks are the extra costs incurred to the system as the result of extra traffic. Marginal costs are required frequently e.g. when considering system optimal traffic assignment or tolling problems. When explicitly considering spillback in a traffic flow model, one can use a numerical derivative or resort to heuristics to calculate the marginal costs. Numerical derivatives are computationally demanding, restricting its use to simple networks. Heuristic approaches in most cases approximate the marginal costs by only considering the extra costs on the links which are traveled by the extra traffic, excluding the possibly external costs incurred on other links due to spillback. This paper proposes a novel way to estimate the true marginal costs of traffic in a dynamic discrete LWR model which correctly deals with congestion onset, spillback and dissolution. The proposed methodology tracks virtual changes in density through the network by means of particles which travel along with the characteristics of traffic. By using density based cost functions, the virtual changes in density can be directly related to the marginal costs. The computational efficiency of the methodology stems from the fact that only local conditions are considered when propagating the virtual change in density. 
The paper discusses the methodology and necessary model extensions, provides a numerical validation experiment illustrating the exact detail of the solution by comparison to a numerical derivative and discusses some generalizations.","optimization; dynamic traffic assignment; system optimal; LWR; marginal costs; particle","en","conference paper","IFAC","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:d8f58668-ba49-441d-bbf0-aa8c7114da4a","http://resolver.tudelft.nl/uuid:d8f58668-ba49-441d-bbf0-aa8c7114da4a","A Unified Approach towards Decomposition and Coordination for Multi-level Optimization","De Wit, A.J.","Van Keulen, A. (promotor)","2009","Complex systems, such as those encountered in aerospace engineering, can typically be considered as a hierarchy of individual coupled elements. This hierarchy is reflected in the analysis techniques that are used to analyze the physcial characteristics of the system. Consequently, a hierarchy of coupled models is to be used, accounting for different physical scales, components and/or disciplines. Numerical optimization of complex systems with embedded hierarchy is accomplished via multi-level optimization methods. Multi-level optimization methods utilize the hierarchical nature of complex systems to distribute the optimization process into smaller coupled less complex optimization problems located at the individual elements of the hierarchy. The present thesis presents a generalized approach towards decomposition and coordination for the numerical optimization of complex systems with embedded hierarchy. The developed methods are applied to numericaly maximizing the range of a supersonic business jet via multi-level optimization considering coupling between multiple engineering disciplines.","multi-level; multi-disciplinary; optimization; decomposition; coordination","en","doctoral thesis","","","","","","","","2009-11-30","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","",""
"uuid:25c85feb-7ef1-4752-9810-e70f49e88802","http://resolver.tudelft.nl/uuid:25c85feb-7ef1-4752-9810-e70f49e88802","On maximum field components in the focal point of a lens","Urbach, H.P.; Pereira, S.F.; Broer, D.J.","","2009","We determine field distributions in the pupil of a high NA lens, that give, for a given power incident on the lens, the maximum electric field amplitude in focus in a specific direction. We consider in particular the cases of maximum longitudinal and maximum transverse components. The distribution of the maximum longitudinal component in the focal plane is narrower than that of the focused Airy spot and hence can give higher resolution in imaging.","High NA; beam shaping; optimization; longitudinal polarization","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:dc5b1158-be54-42d6-a4d3-b0a19462f507","http://resolver.tudelft.nl/uuid:dc5b1158-be54-42d6-a4d3-b0a19462f507","Robustness of networks","Wang, H.","Van Mieghem, P. (promotor)","2009","Our society depends more strongly than ever on large networks such as transportation networks, the Internet and power grids. Engineers are confronted with fundamental questions such as “how to evaluate the robustness of networks for a given service?”, “how to design a robust network?”, because networks always affect the functioning of a service. Robustness is an important issue for many complex networks, on which various dynamic processes or services take place. In this work, we define robustness as follows: a network is more robust if the service on the network performs better, where performance of the service is assessed when the network is either (a) in a conventional state or (b) under perturbations, e.g. failures, virus spreadings etc. In this thesis, we survey a particular line of network robustness research within our general framework: robustness quantification, optimization and the interplay between service and network. Significant progress has been made in understanding the relationship between the structural properties of networks and the performance of the dynamics or services taking place on these networks. We assume that network robustness can be quantified by a topological measure of the network. A brief overview of the topological measures is presented. Each measure may represent the robustness of a network with respect to a certain performance aspect of a service. We focus on the measure known as algebraic connectivity. Evidence collected from literature shows that the algebraic connectivity characterizes network robustness with respect to synchronization of dynamic processes at nodes, random walks on graphs and the connectivity of a network. 
Moreover, we illustrate that, for a given diameter, graphs with large algebraic connectivity tend to be dense in the core and sparse at the border. Such structures distribute traffic homogeneously and are thus robust in terms of traffic engineering. How do we design a robust network with respect to the metric algebraic connectivity? First, the complete graph has the maximal algebraic connectivity, while its high link density makes it impractical to use due to the cost of constructing links. Constraints on other network features are usually set up to incorporate realistic requirements. For example, a constraint on the diameter may guarantee certain end-to-end quality of service levels such as the delay. We propose a class of clique chain structures which optimize the algebraic connectivity and many other robustness features among all graphs with diameter D and size N. The optimal graph within the class can be determined either analytically or numerically. Second, complete replacement of an existing infrastructure is expensive. Thus, we design strategies for robustness optimization using minor topological modifications. These strategies are evaluated in various classes of graphs. The robustness quantification, or equivalently, the association of the performance of a service with a topological measure, may be implicit. In this case, we explore the interplay between topology and service in determining the overall performance. Many services on communications and transportation networks are based on shortest path routing. The weight of a link, such as delay or bandwidth, is generally a metric optimized via shortest path routing. Thus, link weight tuning, a mechanism to control traffic, is also considered as part of the service. 
The interplay between service (shortest path routing and link weight tuning) and topology is investigated for the following performance aspects: (a) the structure of the transport overlay network, which is the union of shortest paths between all node pairs and (b) the traffic distribution in the overlay network. Important new findings are (i) the universal phase transition in overlay structures as we tune the link weight structure over different classes of networks and (ii) the power law traffic distribution in the overlay networks when link weights vary strongly in various classes of networks. Furthermore, we consider the service that measures a network topology as the union of shortest paths among a set of testboxes (nodes). The measured topology is a subgraph of the overlay network, which is again a subgraph of the actual network. The performance in terms of the sampling bias of measuring a network topology is investigated. Our work contributes substantially to a better understanding of the effect of the service (testbox selection) and the actual network structure on the performance with respect to sampling bias. Our investigations on the interplay between service and network again reveal the association between the performance of a service and a certain topological feature, and thus contribute to the quantification of network robustness. The multidisciplinary nature of this research lies not only in the presence of robustness issues in many complex networks, but also in that advances in other disciplines such as graph theory, combinatorics, linear algebra and statistical physics are widely applied throughout the thesis to study optimization problems and the performance of large networks.","robustness; network topology; service; optimization","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","",""
"uuid:c58b5999-da12-4a62-876f-95d7784edf91","http://resolver.tudelft.nl/uuid:c58b5999-da12-4a62-876f-95d7784edf91","Model-Based Control and Optimization of Large Scale Physical Systems - Challenges in Reservoir Engineering","Van den Hof, P.M.J.; Jansen, J.D.; Van Essen, G.M.; Bosgra, O.H.","","2009","Due to urgent needs to increase efficiency in oil recovery from subsurface reservoirs new technology is developed that allows more detailed sensing and actuation of multiphase flow properties in oil reservoirs. One of the examples is the controlled injection of water through injection wells with the purpose to displace the oil in an appropriate direction. This technology enables the application of model-based optimization and control techniques to optimize production over the entire production period of a reservoir, which can be around 25 years. Large scale reservoir flow models are used for optimizing production settings, but suffer from high levels of uncertainty and limited validation options. One of the challenges is the development of reduced complexity models that deliver accurate long-term predictions, and at the same time are not more complex than can be warranted by the amount of data that is available. In this paper an overview will be given of the problems and opportunities for model-based control and optimization in this field aiming at the development of a closed-loop reservoir management system.","petroleum; reservoir; optimization","en","conference paper","IEEE","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:cb3de0cf-a506-4490-b988-f4d1bf00ae55","http://resolver.tudelft.nl/uuid:cb3de0cf-a506-4490-b988-f4d1bf00ae55","Model-based predictive control applied to multi-carrier energy systems","Arnold, M.; Negenborn, R.R.; Andersson, G.; De Schutter, B.","","2009","The optimal operation of an integrated electricity and natural gas infrastructure is investigated. The couplings between the electricity system and the gas system are modeled by so-called energy hubs, which represent the interface between the loads on the one hand and the transmission infrastructures on the other. To increase reliability and efficiency, storage devices are present in the multi-carrier energy system. In order to optimally incorporate these storage devices in the operation of the infrastructure, the capacity constraints and dynamics of these have to be taken into account explicitly. Therefore, we propose a model predictive control approach for controlling the system. This controller takes into account the present constraints and dynamics, and in addition adapts to expected changes of loads and/or energy prices. Simulations in which the proposed scheme is applied to a three-hub benchmark system are presented.","optimal power flow; electric power systems; model predictive control; natural gas systems; optimization","en","conference paper","IEEE","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:ff8e44db-72e2-49fa-bd7f-bde923758e68","http://resolver.tudelft.nl/uuid:ff8e44db-72e2-49fa-bd7f-bde923758e68","An efficient method for reducing the sound speed induced errors in multibeam echosounder bathymetric measurements","Snellen, M.; Siemes, K.; Simons, D.G.","","2009","Nowadays extensive use is made of multibeam echosounders (MBES) for mapping the bathymetry of sea- and river-floors. The MBES is capable of covering large areas in limited time by emitting an acoustic pulse along a wide swathe perpendicular to the sailing direction. The angle and the corresponding two-way travel-time of the received signals are determined through beamsteering at reception. Water depths along the swathe can be derived from this angle and travel-time combination. In general, two sets of sound speed measurements are taken when conducting MBES measurements. The first set is used for the beamsteering and consists of the sound speeds at the MBES transducer. The second set is used for determining the propagation of the sound through the water column, needed for correctly converting the measured travel times to a depth. In general, this set of sound speed measurements consist of the complete sound speed profiles (SSPs). The quality of the sound speed measurements at the transducer position sometimes gets degraded, resulting in beam steering angles that differ from those aimed for. Also sometimes the SSPs used for converting the beam travel times to depths deviate from the true prevailing SSPs due to the, in general, limited amount of SSP measurements taken during a survey. Both above mentioned effects result in an erroneous bathymetry. Here, we present a method for eliminating these errors, without the need for additional sound speed information.","multibeam echosounder; sound speed profile; optimization","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","",""
"uuid:fbc64a39-931e-4b40-8803-486466f20703","http://resolver.tudelft.nl/uuid:fbc64a39-931e-4b40-8803-486466f20703","The potential of inverting geo-technical and geo-acoustic sediment parameters from single-beam echo sounder returns","Simons, D.G.; Snellen, M.; Siemes, K.","","2009","Seafloor characterization is important in many fields including hydrography, marine geology, coastal engineering and habitat mapping. The advantage of non-invasive acoustic methods for sediment characterization over conventional bottom grabbing is the nearly continuous versus sparse sensing and the enormous reduction in survey time and costs. Among the various acoustic systems for seafloor characterization, the single-beam echo sounder is of particular interest due to its simplicity and versatility. Seafloor characterization algorithms can be roughly divided into two categories: model-based and empirical, where the latter simply relies on the observation that certain echo features, such as amplitude, duration and skewness of the echo, are correlated with sediment type. Here we apply the model-based approach where we compare the measured echo signal with theoretically modeled echo envelopes in the time domain. For modeling the received echo sounder signals use is made of a physical backscatter model that fully accounts for watersediment interface roughness and sediment volume scattering. We use differential evolution, a fast variant of a genetic algorithm, as the global optimization method to invert the model input parameters mean grain size, spectral strength of the interface roughness and volume scattering cross section. In the model grain size determines geo-acoustic parameters like sediment sound speed, density and attenuation. The analysis is applied to simulated data.","single-beam echosounder; seafloor classification; optimization","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","",""
"uuid:6c6197bd-5757-428a-9d3d-e94af148ce90","http://resolver.tudelft.nl/uuid:6c6197bd-5757-428a-9d3d-e94af148ce90","A systematic analysis of the optical merit function landscape: Towards improved optimization methods in optical design","Van Turnhout, M.","Urbach, H.P. (promotor); Bociort, F. (promotor)","2009","A major problem in optical system design is that the optical merit function landscape is usually very complicated, especially for complex design problems where many minima are present. Finding good new local minima is then a difficult task. We show however that a certain degree of order is present in the optical design space, which is best observed when we consider not only local minima, but saddle points as well. With a special method, which we call Saddle-Point Construction (SPC), saddle points can be constructed in a simple way. Via saddle points, new local minima can be obtained very rapidly. When using a local optimization method, the final design after optimization highly depends on the starting configuration. We can group the initial configurations that lead to a given local minimum after local optimization into a graphical region, which shape depends on the optimization method used. However, saddle points are critical points in the merit function landscape that always remain on the boundaries, independent of the used optimization method. When the local optimization process is not chaotic, the geometric decomposition of the space of initial configurations into discrete regions has boundaries given by simple curves. But when the optimization is chaotic, the curves separating the different regions are very complicated objects termed fractals. In such cases, starting configurations, which are very close to each other, lead to different local minima after optimization. 
A better understanding of these instabilities can be obtained by using low damping values in a damped least-squares method.","optical system design; saddle point; optimization; fractal; chaos","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:4f491cc5-cdc7-49b4-8b80-700dae2cf57c","http://resolver.tudelft.nl/uuid:4f491cc5-cdc7-49b4-8b80-700dae2cf57c","Validity improvement of evolutionary topology optimization: Procedure with element replaceable method","Zhu, J.; Zhang, W.; Bassir, D.H.","","2009","The aim of this paper is to enhance the validity of existing evolutionary topology optimization procedures. Because the hard-killing scheme based on element sensitivity values may lead to incorrect predictions of the inefficient elements to be removed, causing the objective function value to deteriorate sharply during the iterations, a check position (CP) control is proposed to prevent erroneous topology designs generated by the rejection criteria of evolutionary methods. For this purpose, we introduce a sort of orthotropic cellular microstructure (OCM) element with moderate pseudodensity that acts as a compromising element between solid element and void OCM element. In this way, all inefficient elements removed previously are automatically replaced with the moderate OCM elements depending upon the deterioration of the objective function. Erroneously removed elements are then identified in the updated finite element model through a direct sensitivity computing of the moderate OCM elements and will be finally recovered by the bi-directional element replacement. In addition, detailed structures with checkerboard patterns are eliminated by controlling the local structural bandwidth with the so-called threshold method. Typical optimization examples of structural compliance and natural frequency that were difficult to tackle are solved by the proposed design procedure. Satisfactory numerical results are obtained.","optimization; evolutionary method; erroneous design; check position control; moderate microstructure","en","journal article","EDP sciences","","","","","","","","Aerospace Engineering","Aerospace Structures","","","",""
"uuid:ff66e490-db59-4e3c-b6e2-926da4f074df","http://resolver.tudelft.nl/uuid:ff66e490-db59-4e3c-b6e2-926da4f074df","Algebraic Connectivity Optimization via Link Addition","Wang, H.; Van Mieghem, P.","","2008","","algebraic connectivity; synchronization; optimization; link addition","en","conference paper","ICST","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:a8ec762b-8e2a-422f-9978-a6e85673df40","http://resolver.tudelft.nl/uuid:a8ec762b-8e2a-422f-9978-a6e85673df40","Understanding catchment behaviour through model concept improvement","Fenicia, F.","Savenije, H.H.G. (promotor)","2008","This thesis describes an approach to model development based on the concept of iterative model improvement, which is a process where by trial and error different hypotheses of catchment behaviour are progressively tested, and the understanding of the system proceeds through a combined process of modelling and experimenting. We show a number of case studies where we demonstrate the need of combining the power of physical laws and established scientific theories with qualitative understanding of natural phenomena, which requires creativity and intuition. We emphasize the importance of the 'Art' of modelling, which is often a neglected aspect of scientific research. We address topical research issues such as reducing model structural uncertainty through progressive understanding of catchment behaviour, incorporating process knowledge in the different stages of model development, linking modelling and experimentation, and understanding the contribution of data to process understanding.","hydrological modelling; calibration; optimization; uncertainty; model structure","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:f16b0c66-bef3-46f9-a84c-174c0e0bc449","http://resolver.tudelft.nl/uuid:f16b0c66-bef3-46f9-a84c-174c0e0bc449","Saddle-point construction in the design of lithographic objectives, part 1: Method","Marinescu, O.; Bociort, F.","","2008","","saddle point; lithography; optimization; optical system design; EUV; DUV","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:7cd0b27c-f95b-47c3-969b-36c4b7affa0d","http://resolver.tudelft.nl/uuid:7cd0b27c-f95b-47c3-969b-36c4b7affa0d","Saddle-point construction in the design of lithographic objectives, part 2: Application","Marinescu, O.; Bociort, F.","","2008","","saddle point; lithography; optimization; optical system design; EUV; DUV","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:324e0e8a-527e-43bb-87c0-8e131654acc9","http://resolver.tudelft.nl/uuid:324e0e8a-527e-43bb-87c0-8e131654acc9","Performance Enhancement of Abrasive Waterjet Cutting","","Karpuschewski, B. (promotor)","2008","Abrasive Waterjet (AWJ) Machining is a recent non-traditional machining process. This technology is widely used in industry for cutting difficult-to-machine-materials, milling slots, polishing hard materials etc. AWJ machining has many advantages, e.g. it can cut net-shape parts, no heat is generated during the cutting process, it is particularly environmentally friendly as it is clean and it does not create dust. Although AWJ machining has many advantages, a big disadvantage of this technology is its relatively high cutting cost. Consequently, the reduction of the machining cost and the increase of the profit rate are big challenges in AWJ technology. To reduce the total cutting cost as well as to increase the profit rate, this research focuses on performance enhancement of AWJ cutting with two possible solutions including optimization in the cutting process and abrasive recycling. The first solution to enhance the AWJ cutting performance is the optimization of the AWJ cutting process. As a precondition, it is necessary to have a cutting process model for optimization. In order to use that model for this purpose, several important requirements are given. The most important requirement for such a model is that it can describe the ""optimum relation"" between the optimum abrasive mass flow rate and the maximum depth of cut. To develop a cutting process model which can be used for the AWJ optimization, many available models have been analyzed. Since the most important requirement for a process model (see above) can be obtained from Hoogstrate's model, an extension of this model is carried out. The extension model consists of three sub-models including pure waterjet model, abrasive waterjet model and abrasive-work material interaction model. 
The extension cutting process model is more accurate than the original one and is capable of optimizing AWJ systems. The influence of many process parameters, the work materials, the abrasive type and size has been taken into account. Up to now, there has not been a model for the prediction of AWJ nozzle wear. Therefore, modeling the nozzle wear rate has been carried out and a model for the wear rate of nozzles made from composite carbide has been proposed. Based on the extension cutting process model, two types of optimization applications have been carried out. They are related to technical problems and economical problems. From the results of these problems, regression models for determining the optimum nozzle exchange diameter and the optimum abrasive mass flow rate for various objectives have been proposed. The other solution to enhance the cutting performance is abrasive recycling. In this study, GMA garnet, the most popular abrasive for blast cleaning and waterjet cutting, has been chosen for the investigation. The recycling of GMA abrasives has been investigated on both the technical and the economical side. On the technical side, the reusability and the cutting performance of the recycled and recharged abrasives have been analysed. The influence of the recycled and recharged abrasives on the cutting quality was studied. On the economical side, first, the prediction of the cost of recycled and recharged abrasives was done. Then, the economic comparisons for selecting abrasives have been carried out. In addition, the economics of cutting with recycled and recharged abrasives have been studied. Several suggestions for an abrasive recycling process which promises a more effective use of the grains have been proposed. By optimization in the cutting process and by abrasive recycling, the cutting performance can be increased, the total cutting cost can be reduced, and the profit rate can be enlarged considerably. 
Consequently, the performance of AWJ cutting can be enhanced significantly.","abrasive waterjet; waterjet; optimization; abrasive recycling; modeling","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:20b5a4b5-6419-4593-a668-48074982bcb3","http://resolver.tudelft.nl/uuid:20b5a4b5-6419-4593-a668-48074982bcb3","Model-based lifecycle optimization of well locations and production settings in petroleum reservoirs","Zandvliet, M.J.","Bosgra, O.H. (promotor); Jansen, J.D. (promotor)","2008","In the coming years there is a need to increase production from petroleum reservoirs, and there is an enormous potential to do so by increasing the recovery factor. This is possible by making better use of recent technological developments, such as horizontal wells, downhole valves and sensors. However, actually making better use of these improved capabilities is difficult because of many open problems in reservoir management and production operations processes. Consequently, there is significant scope to increase the recovery factor of oil and gas fields by tailoring tools from the systems and control community to efficiently perform dynamic optimization of wells (e.g. number, locations) and their production settings (e.g. bottom-hole pressures, flow rates, valve settings) based on uncertain reservoir models, in the sense that they lead to good decisions while requiring limited time from the user. This thesis aims at developing these tools, and the main contributions are as follows. Many production setting optimization problems can be written as optimal control problems that are linear in the control. If the only constraints are upper and lower bounds on the control, these problems can be expected to have pure bang-bang optimal solutions. The adjoint method to derive gradients of a cost function with respect to production settings can be combined with robust optimization to efficiently compute settings that are robust against uncertainty in reservoir models. 
The gradients used in production setting optimization can be used to efficiently compute directions in which to iteratively improve upon an initial well configuration by surrounding the to-be-placed wells by pseudo wells (i.e. wells that operate at a negligible rate). The controllability and observability properties of a single-phase flow reservoir model are analyzed. It is shown that pressures near wells in which we can control the flow rate or bottom-hole pressure are controllable, whereas pressures near wells in which we can measure the flow rate or bottom-hole pressure are observable. Finally, a new method of regularization in history matching is presented, based on this controllability and observability analysis.","petroleum; reservoir engineering; systems and control; optimization","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:4f4b7fb1-4a77-46bb-9c14-ff5e4bb6477c","http://resolver.tudelft.nl/uuid:4f4b7fb1-4a77-46bb-9c14-ff5e4bb6477c","Optimization of extreme ultraviolet mirror systems comprising high-order aspheric surfaces","Marinescu, O.; Bociort, F.","","2008","","mirror systems; aspheres; extreme ultraviolet lithography; optimization; relaxation","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:5feb9aa6-d1bc-482b-8570-7e892bdf3bc5","http://resolver.tudelft.nl/uuid:5feb9aa6-d1bc-482b-8570-7e892bdf3bc5","Optimization based image registration in the presence of moving objects","Karimi Nejadasl, F.; Gorte, B.G.H.; Hoogendoorn, S.P.; Snellen, M.","","2008","","registration; optimization; Differential Evolution; Nelder-Mead; 3D Euclidean","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","",""
"uuid:d50848b4-cd08-4482-a824-7d51700be44e","http://resolver.tudelft.nl/uuid:d50848b4-cd08-4482-a824-7d51700be44e","Integrated modeling of ozonation for optimization of drinking water treatment","van der Helm, A.W.C.","van Dijk, J.C. (promotor)","2007","Automation of drinking water treatment plants is becoming more sophisticated, more on-line monitoring systems are becoming available and integration of modeling environments with control systems is becoming easier. This gives possibilities for model-based optimization. In operation of drinking water treatment plants, the processes are usually optimized individually on the basis of ""rules of thumb"" and operator knowledge and experience. However, changes in operational conditions of individual processes can affect subsequent processes, and an optimal operation, which can include a number of water quality parameters, costs and environmental impact, is different for every operator. Improvement of the operation of a drinking water treatment plant is possible by using an integrated model of the entire water treatment plant as an instrument for operational support and for process control. For this purpose, it is important that explicit objectives are defined for the operation. From the research it is concluded that the objective for integrated optimization of the operation of drinking water treatment should be the improvement of water quality and not a priori reduction of environmental impact or costs. In the research an integrated model for ozonation, including ozone decay, bromate formation, assimilable organic carbon (AOC) formation, E. coli disinfection, CT and decrease in UV absorbance at 254 nm (UVA254), is developed. With the model, different control strategies for ozonation are assessed. The research also describes a newly developed design for ozone installations, the dissolved ozone plug flow reactor (DOPFR), and the effect of character and removal of natural organic matter (NOM) prior to ozonation. 
The research was carried out as part of the project Promicit, a cooperation of Waternet, Delft University of Technology, DHV B.V. and ABB B.V. and was subsidized by SenterNovem, agency of the Dutch Ministry of Economic Affairs. Part of the experiments was performed in cooperation with Kiwa Water Research.","modeling; modelling; integrated; ozonation; optimization; drinking water; drinking water treatment; bromate; natural organic matter; nom; disinfection; assimilable organic carbon; aoc; life cycle assessment; lca; bottled water","en","doctoral thesis","Water Management Academic Press","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:c05ad7d6-5504-4fa4-a14f-496e9bb20928","http://resolver.tudelft.nl/uuid:c05ad7d6-5504-4fa4-a14f-496e9bb20928","Predictability and unpredictability in optical system optimization","Van Turnhout, M.; Bociort, F.","","2007","Local optimization algorithms, when they are optimized only for speed, have in certain situations an unpredictable behavior: starting points very close to each other lead after optimization to different minima. In these cases, the sets of points, which, when chosen as starting points for local optimization, lead to the same minimum (the so-called basins of attraction), have a fractal-like shape. Before it finally converges to a local minimum, optimization started in a fractal region first displays chaotic transients. The sensitivity to changes in the initial conditions that leads to fractal basin borders is caused by the discontinuous evolution path (i.e. the jumps) of local optimization algorithms such as the damped-least-squares method with insufficient damping. At the cost of some speed, the fractal character of the regions can be made to vanish, and the downward paths become more predictable. The borders of the basins depend on the implementation details of the local optimization algorithm, but the saddle points in the merit function landscape always remain on these borders.","optimization; optical system design; saddle points; fractals; basins of attraction","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:28b2169c-2dc0-4258-b572-8c2320cf81d1","http://resolver.tudelft.nl/uuid:28b2169c-2dc0-4258-b572-8c2320cf81d1","Practical guide to saddle-point construction in lens design","Bociort, F.; Van Turnhout, M.; Marinescu, O.","","2007","Saddle-point construction (SPC) is a new method to insert lenses into an existing design. With SPC, by inserting and extracting lenses new system shapes can be obtained very rapidly, and we believe that, if added to the optical designer’s arsenal, this new tool can significantly increase design productivity in certain situations. Despite the fact that the theory behind SPC contains mathematical concepts that are still unfamiliar to many optical designers, the practical implementation of the method is actually very easy and the method can be fully integrated with all other traditional design tools. In this work we will illustrate the use of SPC with examples that are very simple and illustrate the essence of the method. The method can be used essentially in the same way even for very complex systems with a large number of variables, in situations where other methods for obtaining new system shapes do not work so well.","optical system design; optimization; saddle points","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:703cd3c2-8cf4-48f7-babc-8b33cdd38949","http://resolver.tudelft.nl/uuid:703cd3c2-8cf4-48f7-babc-8b33cdd38949","Optimization technique for ED&PE","Kumar, P.; Bauer, P.","","2007","","optimization; BLDC drive","en","conference paper","Tulip","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:8eff9ef1-b509-4f3d-b1f7-7d1357c53ff8","http://resolver.tudelft.nl/uuid:8eff9ef1-b509-4f3d-b1f7-7d1357c53ff8","Structured controller synthesis for mechanical servo-systems: Algorithms, relaxations and optimality certificates","Hol, C.W.J.","Scherer, C.W. (promotor); Bosgra, O.H. (promotor)","2006","In many application areas of mechanical servo-systems the high demands on the performance often imply a tightly tuned feedback controller, that takes dynamical interaction into account. Model-based H-optimal controller synthesis is a well-suited technique for this purpose. However, the state-of-the-art synthesis approach yields controllers with high McMillan degree that cannot be implemented in real-time at high sampling-rates, because of the limited computational capacity. This motivates constraining the McMillan degree of the controller. The aim of this thesis is to provide numerical tools for H-optimal degree constrained (or otherwise structured) controller synthesis. For this problem we have developed relaxations that are based on Sum-Of-Squares polynomials. Their optimal values are lower bounds on the globally optimal structured controller synthesis problem and can be computed by solving LMI problems. It is guaranteed that the bounds converge to the best achievable performance as we improve our relaxations. To make this technique feasible for plants with high McMillan degree, we proposed a computationally less demanding scheme based on partial dualization. The Sum-Of-Squares relaxations have also been applied to robust polynomial Semi-Definite Programs (SDPs). Also for this case a sequence of relaxations has been developed, whose optimal values converge from below to the optimal value of the robust SDP. Furthermore, for the structured controller synthesis problem an Interior Point algorithm has been developed. It is shown how this algorithm can be made more efficient by exploiting the control-theoretic characteristics of the problem. 
Conditions have been derived to verify local optimality of the optimized controller. Finally, it has been illustrated by real-time experiments that the algorithms described in this thesis can be used to synthesize high-performing fixed-order controllers for a new prototype of a wafer stage.","controller synthesis; static output feedback; optimization; sumof-squares; matrix inequalities; bmi; lmi; interior point","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:cdc345d1-a0b5-4b70-98fb-bc2235c818a6","http://resolver.tudelft.nl/uuid:cdc345d1-a0b5-4b70-98fb-bc2235c818a6","Application of sonic boom optimization to supersonic aircraft design","Daumas, L.; Dinh, Q.V.; Kleinveld, S.; Rogé, G.","","2006","Preliminary results on shape optimization of a wing-body configuration aiming at reducing sonic boom overpressure will be discussed. The optimization process uses a CAD modeler and an Euler CFD code with adjoint. Thickness, scale, twist and camber at section level were used to obtain gains in ground pressure signature.","adjoint; CAD modeller; optimization; sonic boom; supersonic aircraft design","en","conference paper","","","","","","","","","","","","","",""
"uuid:63a75aa9-c71e-4439-9d0b-864fe8c2915d","http://resolver.tudelft.nl/uuid:63a75aa9-c71e-4439-9d0b-864fe8c2915d","A continuous adjoint formulation with emphasis to aerodynamic-turbomachinery optimization","Papadimitriou, D.I.; Giannakoglou, K.C.","","2006","This paper summarizes progress, recently made in the Lab. of Thermal Turbomachines of NTUA, on the formulation and use of the continuous adjoint methods in aerodynamic shape optimization problems. The basic features of state-of-the-art adjoint methods and tools which are capable of handling arbitrary objective functions, cast in the form of either boundary or field integrals, are presented. Starting point of the presentation is the formulation of the continuous adjoint method for arbitrary integral objective functionals in problems governed by arbitrary, linear or nonlinear, first or second order state pde's; the scope of this section is to demonstrate that the proposed formulation is general without being restricted to aerodynamics. It is noticeable that, regardless of the type of functional (field or boundary integral) the expressions of its gradient with respect to the design variables include boundary integrals only. Thus, the derived adjoints can be used with either structured or unstructured grids and there is no need for repetitive remeshing or computation of field integrals which increase the CPU cost and deteriorate the computational accuracy. Then, the presentation focuses on aerodynamic shape optimization problems governed by the compressible fluid flow equations, numerically solved through a time-marching formulation and an upwind discretization scheme for the convection terms. Two design problems, namely the inverse design of a 2D cascade at inviscid flow conditions (used as a test bed for the assessment of three descent algorithms based on the same gradient information) and the design optimization of a 3D peripheral compressor cascade for minimum viscous losses are presented. 
For the latter, the flow is turbulent and the field integral of entropy generation, recently proposed by the same authors, is used as objective function.","continuous adjoint; inverse design; optimization; losses minimization; turbomachines","en","conference paper","","","","","","","","","","","","","",""
"uuid:11464f49-b10b-48ed-9075-9e281514618a","http://resolver.tudelft.nl/uuid:11464f49-b10b-48ed-9075-9e281514618a","Analytical and Numerical Developments in Optimal Shape Design for Aerospace: An overview","Pironneau, O.","","2006","","optimization; optimal shape design; gradient methods; finite element methods","en","conference paper","","","","","","","","","","","","","",""
"uuid:8b3c60a5-4e17-4680-b7c6-252fb4ae87ca","http://resolver.tudelft.nl/uuid:8b3c60a5-4e17-4680-b7c6-252fb4ae87ca","VIVACE: Multidisciplinary Decision Support","Homsi, P.","","2006","","collaboration; multidisciplinary; optimization; decision; knowledge; data management; virtual enterprise; aeronautic; aircraft; engine","en","conference paper","","","","","","","","","","","","","",""
"uuid:fc982426-38af-4ba7-bc57-c3e44f14c4c6","http://resolver.tudelft.nl/uuid:fc982426-38af-4ba7-bc57-c3e44f14c4c6","Aerodynamic optimization of an airfoil using gradient based method","Mirzaei, M.; Roshanian, J.; Nasrin Hosseini, S.","","2006","A gradient based method is presented for optimization of an airfoil configuration. The flow is governed by two dimensional, compressible Euler equations. A finite volume code based on unstructured grid is developed to solve the equations. The procedure is carried out for optimizing an airfoil with initial configuration of NACA 0012. The advantage of this technique over other gradient based methods is its convergence speed.","CFD; optimization; gradient; objective function; design variables","en","conference paper","","","","","","","","","","","","","",""
"uuid:8abc533d-b860-46c1-8868-5eabdb33e415","http://resolver.tudelft.nl/uuid:8abc533d-b860-46c1-8868-5eabdb33e415","Partitioned strategies for optimization in FSI","Bletzinger, K.U.; Gallinger, T.; Kupzok, A.; Wüchner, R.","","2006","In this paper the possibility of the optimization of coupled problems in partitioned approaches is discussed. As a special focus, surface coupled problems of fluid-structure interaction are considered. Well established methods of optimization are analyzed for usage in the context of coupled problems and in particular for a solution through partitioned approaches. The main benefits expected from choosing a partitioned solution strategy as basis for the optimization are: a high flexibility in the usage of different solvers and therefore different approaches for the single-field problems as well as the possibility to apply well tested and sophisticated methods for the modeling of complex problems.","optimization; coupled problems; fluid-structure interaction; partitioned approach","en","conference paper","","","","","","","","","","","","","",""
"uuid:197e6db7-921d-4786-958d-b0c06079f1fc","http://resolver.tudelft.nl/uuid:197e6db7-921d-4786-958d-b0c06079f1fc","Realistic high-lift design of transport aircraft by applying numerical optimization","Wild, J.; Brezillon, J.; Mertins, R.; Quagliarella, D.; Germain, E.; Amoignon, O.; Moens, F.","","2006","The design activity within the EUROLIFT II project is targeted towards an improvement of the take-off performance of a generic transport aircraft configuration by a re-design of the trailing edge flap. The involved partners applied different optimization strategies as well as different types of flow solvers in order to cover a wide range of possible approaches for aerodynamic design optimization. The optimization results obtained by the different partners have been cross-checked in order to eliminate solver dependencies and to identify the best obtained design. The final selected design has been applied to the wind tunnel model and the test in the European Transonic Wind Tunnel (ETW) at high Reynolds number confirms the predicted improvements.","optimization; high-lift; application; CFD; wind tunnel testing","en","conference paper","","","","","","","","","","","","","",""
"uuid:ea7af067-bd46-48c8-a147-fe4cddc936ec","http://resolver.tudelft.nl/uuid:ea7af067-bd46-48c8-a147-fe4cddc936ec","Looking for order in the optical design landscape","Bociort, F.; Van Turnhout, M.","","2006","In present-day optical system design, it is tacitly assumed that local minima are points in the merit function landscape without relationships between them. We will show however that there is a certain degree of order in the design landscape and that this order is best observed when we change the dimensionality of the optimization problem and when we consider not only local minima, but saddle points as well. We have developed earlier a computational method for detecting saddle points numerically, and a method, then applicable only in a special case, for constructing saddle points by adding lenses to systems that are local minima. The saddle point construction method will be generalized here and we will show how, by performing a succession of one-dimensional calculations, many local minima of a given global search can be systematically obtained from the set of local minima corresponding to systems with fewer lenses. As a simple example, the results of the Cooke triplet global search will be analyzed. In this case, the vast majority of the saddle points found by our saddle point detection software can in fact be obtained in a much simpler way by saddle point construction, starting from doublet local minima.","saddle point; optimization; optical system design; lithography","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:cdd281b2-0bc7-4f57-a9fb-3ddbe49c1082","http://resolver.tudelft.nl/uuid:cdd281b2-0bc7-4f57-a9fb-3ddbe49c1082","Designing lithographic objectives by constructing saddle points","Marinescu, O.; Bociort, F.","","2006","Optical designers often insert or split lenses in existing designs. Here, we present, with examples from Deep and Extreme UV lithography, an alternative method that consists of constructing saddle points and obtaining new local minima from them. The method is remarkably simple and can therefore be easily integrated with the traditional design techniques. It has significantly improved the productivity of the design process in all cases in which it has been applied so far.","saddle point; lithography; optical system design; optimization; DUV; EUV","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:b842a4d0-0708-4c37-b3e7-e86f91c72dd4","http://resolver.tudelft.nl/uuid:b842a4d0-0708-4c37-b3e7-e86f91c72dd4","Challenges for process system engineering in infrastructure operation and control","Lukszo, Z.; Weijnen, M.P.C.; Negenborn, R.R.; De Schutter, B.; Ilic, M.","","2006","The need for improving the operation and control of infrastructure systems has created a demand on optimization methods applicable in the area of complex sociotechnical systems operated by a multitude of actors in a setting of decentralized decision making. This paper briefly presents main classes of optimization models applied in PSE system operation, explores their applicability in infrastructure system operation and stresses the importance of multi-level optimization and multi-agent model predictive control. If you want to cite this report, please use the following reference instead: Z. Lukszo, M.P.C. Weijnen, R.R. Negenborn, B. De Schutter, and M. Ilic, “Challenges for process system engineering in infrastructure operation and control,” in 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering (Garmisch-Partenkirchen, Germany, July 2006) (W. Marquardt and C. Pantelides, eds.), vol. 21 of Computer-Aided Chemical Engineering, Amsterdam, The Netherlands: Elsevier, ISBN 978-0-444-52969-5, pp. 95–100, 2006.","infrastructures; optimization; multi-agent systems; model predictive control","en","report","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","http://resolver.tudelft.nl/uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","Model reduction for dynamic real-time optimization of chemical processes","Van den Berg, J.","Bosgra, O.H. (promotor)","2005","The value of models in process industries becomes apparent in practice and literature where numerous successful applications are reported. Process models are being used for optimal plant design, simulation studies, for off-line and online process optimization. For online optimization applications the computational load is a limiting factor. The focus of this thesis is on nonlinear model approximation techniques aiming at reduction of computational load of a dynamic real-time optimization problem. Two types of model approximation methods were selected from literature and assessed within a dynamic optimization case study: model reduction by projection and physics-based model reduction. Model order reduction by projection is partially successful. Even with a strongly reduced number of transformed differential equations it is possible to compute acceptable approximate solutions. Projection does not provide predictable results in terms of simulation error and stability and does not reduce the computational load of simulation. On the other hand, physics-based model reduction appeared to be very successful in reducing the computational load of the sequential dynamic optimization problem.","chemical processes; model reduction; optimization","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:a29ca0b4-c17d-4a14-99c0-9672b805021e","http://resolver.tudelft.nl/uuid:a29ca0b4-c17d-4a14-99c0-9672b805021e","Uncertainty-based Design Optimization of Structures with Bounded-But-Unknown Uncertainties","Gurav, S.P.","van Keulen, A. (promotor)","2005","","uncertainty; optimization; response surface; parallel computing; MEMS","en","doctoral thesis","Delft University Press","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:7bf2a037-c8eb-44be-96ef-411529c4be0b","http://resolver.tudelft.nl/uuid:7bf2a037-c8eb-44be-96ef-411529c4be0b","Topology Optimization using a Topology Description Function Approach","de Ruiter, M.J.","van Keulen, F. (promotor)","2005","During the last two decades, computational structural optimization methods have emerged, as computational power increased tremendously. Designers now have topological optimization routines at their disposal. These routines are able to generate the entire geometry of structures, provided only with information on loads, supports, and space to work in. The most common way to do this is to partition the available space in elements, and to determine the material content of each of the elements separately. This thesis presents a different approach, namely the \emph{Topological Description Function} (TDF) approach. The TDF is a function parametrized by design variables. The function determines a geometry using a level-set approach. A finite element representation of the geometry then is used to determine how well the geometry performs with respect to objective and constraints. This information is given to an optimization program, which has the purpose of finding an optimal combination of values for the design variables. This approach decouples the geometry description of the design from the evaluation, allowing the designer to tune the detailedness of the geometry and the computational grid separately as wished. In this thesis, the concept of a TDF is explained in detail. Using a genetic algorithm for the optimization turns out to be too computationally expensive, however, it shows the validity of the TDF as a geometry description method. 
A method based on an intuitive updating scheme shows that the TDF approach can be used to do topology optimization.","level set method; topology; optimization; tdf; topology description function; genetic algorithm; optimality criteria method; structural optimization","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:33282f5f-e093-4a9a-88e8-819ccfb40114","http://resolver.tudelft.nl/uuid:33282f5f-e093-4a9a-88e8-819ccfb40114","Model-based optimization of the operation procedure of emulsification","Stork, M.","Bosgra, O.H. (promotor)","2005","Emulsions are widely encountered in the food and cosmetic industry. The first food we consume is an emulsion, namely breast milk. Other common emulsions are mayonnaise, dressings, skin creams and lotions. Equipment often used for the production of oil-in-water emulsions in the food industry consists of a stirred vessel in combination with a colloid mill and a circulation pipe. Within this set-up there are two main variations: i) Configuration I where the colloid mill acts like a shearing device and at the same time as a pump. This configuration is used in the majority of the production facilities, and ii) Configuration II where the shearing and pumping action are not coupled. The operation procedure for obtaining a certain predefined emulsion quality is often established based on experience (best practice). This is most probably time-consuming (e.g. large experimental efforts for new developed products) and it is also unclear if the process is operated at its optimum (e.g. in minimum time). An other drawback is that there is no feedback during the production process. Hence, it is not possible to deal with disturbances acting on the process. A possible consequence is that, at the end of the production process, the product quality specifications are not met and the product has to be classified as off-spec. In order to be able to enlarge the efficiency of the production processes and to shorten the time to market of new products - and therewith create an advantage over competition - it is necessary to overcome these limitations of the current operation procedure. In the work reported a first step is set into this direction. 
A model describing the droplet size distribution (DSD) and the emulsion viscosity as a function of time was developed and several off-line optimization studies were performed. The model comprises several fit parameters and experiments were performed in order to estimate the values of these parameters. A number of additional experiments were performed to compare the simulated results with the measurements (model validation). The results of the parameter estimation and the model validation show that the simulated results are qualitatively in good agreement with the measurement data. Given the overall performance of the model, it is expected that the model quality is sufficient to render practically relevant optimization results. Although the optimization studies have been performed for a model emulsion and small-scale equipment, and are not yet experimentally validated, the results of this work strongly suggest that it is indeed possible to minimize the production times and to shorten the product development times for new products. This overall conclusion is based on the following observations: 1) The optimization results show that it is beneficial to produce emulsions with Configuration II: - Configuration II allows the production of emulsions with a bi-modal DSD. No operation procedure was found for the production of such an emulsion in Configuration I. - The production of emulsions in Configuration II is always at least as fast as in Configuration I. 2) The approach followed allows one to calculate: * If an emulsion with a certain, predefined, DSD and emulsion viscosity can be produced. * How the process should be controlled in order to produce such an emulsion. * How the process should be controlled to produce this emulsion in minimal time. 3) The optimization results show that it is possible to produce emulsions with: * A bi-modal DSD. 
* Less oil while maintaining a similar DSD and value of the emulsion viscosity evaluated at a shear rate of 10 1/s by adapting only the operation procedure. Hence, the addition of extra stabilizers is not considered. This offers possibilities for the production of a broader range of emulsion products and could direct product development in a new direction. Based on this, it is worthwhile and therefore recommended to expand this research work in the direction of industrial emulsions.","modeling; emulsions; emulsification; optimization; milp; parameter estimation; fryma-delmix; colloid mill; population balance equations; droplet size distribution; mayonnaise","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:e15f936a-9439-4247-b0f9-051619b34cd4","http://resolver.tudelft.nl/uuid:e15f936a-9439-4247-b0f9-051619b34cd4","Finding new local minima by switching merit functions in optical system optimization","Serebriakov, A.; Bocoirt, F.; Braat, J.","","2005","","optical design; geometrical optics; optimization; merit function; aberrations","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:43fb3a2f-0c02-406a-ad7d-374ec5f71d63","http://resolver.tudelft.nl/uuid:43fb3a2f-0c02-406a-ad7d-374ec5f71d63","Optimization and analysis of deep-UV imaging systems","Serebriakov, A.G.","Braat, J.J.M. (promotor)","2005","This thesis has been devoted to two main subjects: the compensation of birefringence induced by spatial dispersion (BISD) in Deep-UV lithographic objectives and the optimization of optical systems in general.","optimization; lithography; optics","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:ab738b03-b906-4dc7-9e9c-6ac16446af10","http://resolver.tudelft.nl/uuid:ab738b03-b906-4dc7-9e9c-6ac16446af10","Saddle points in the merit function landscape of lithographic objectives","Marinescu, O.; Bociort, F.","","2005","The multidimensional merit function space of complex optical systems contains a large number of local minima that are connected via links that contain saddle points. In this work, we illustrate a method to construct such saddle points with examples of deep UV objectives and extreme UV mirror systems for lithography. The central idea of our method is that, at certain positions in a system with N surfaces that is a local minimum, a thin meniscus lens or two mirror surfaces can be introduced to construct a system with N+2 surfaces that is a saddle point. When the optimization goes down on the two sides of the saddle point, two minima are obtained. We show that often one of these two minima can be reached from several other saddle points constructed in the same way. The practical advantage of saddle-point construction is that we can produce new designs from the existing ones in a simple, efficient and systematic manner.","saddle point; lithography; optimization; optical system design; EUV","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","",""
"uuid:05dfafdc-cd7c-4b17-a92f-8420e5bb78a0","http://resolver.tudelft.nl/uuid:05dfafdc-cd7c-4b17-a92f-8420e5bb78a0","Generating saddle points in the merit function landscape of optical systems","Bociort, F.; Van Turnhout, M.","","2005","Finding multiple local minima in the merit function landscape of optical system optimization is a difficult task, especially for complex designs that have a large number of variables. We discuss here a method that enables a rapid generation of new local minima for optical systems of arbitrary complexity. We have recently shown that saddle points known in mathematics as Morse index 1 saddle points can be useful for global optical system optimization. In this work we show that by inserting a thin meniscus lens (or two mirror surfaces) into an optical design with N surfaces that is a local minimum, we obtain a system with N+2 surfaces that is a Morse index 1 saddle point. A simple method to compute the required meniscus curvatures will be discussed. Then, letting the optimization roll down on both sides of the saddle leads to two different local minima. Often, one of them has interesting special properties.","saddle point; optimization; optical system design; lithography","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","",""
"uuid:1e3ce36d-f1f6-4fbd-9349-42ba2352d668","http://resolver.tudelft.nl/uuid:1e3ce36d-f1f6-4fbd-9349-42ba2352d668","The network structure of the merit function space of EUV mirror systems","Marinescu, O.; Bociort, F.","","2005","The merit function space of mirror systems for EUV lithography is studied. Local minima situated in a multidimensional merit function space are connected via links that contain saddle points and form a network. In this work we present the first networks for EUV lithographic objectives and discuss how these networks change when control parameters, such as aperture and field are varied and constraints are used to limit the variation domain of the variables. A good solution in a network obtained with a limited number of variables has been locally optimized with all variables to meet practical requirements.","network; saddle point; optical system design; EUV lithography; optimization","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","",""
"uuid:a4d313dc-81f6-4f5f-a83a-404f539aa838","http://resolver.tudelft.nl/uuid:a4d313dc-81f6-4f5f-a83a-404f539aa838","Optimization of multilayer reflectors for extreme ultraviolet lithography","Bal, M.F.; Singh, M.; Braat, J.J.M.","","2004","","multilayer; optimization; extreme ultraviolet lithography; graded multilayers; imaging","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","",""
"uuid:c253f0fa-a879-422b-8027-b3de1f91775a","http://resolver.tudelft.nl/uuid:c253f0fa-a879-422b-8027-b3de1f91775a","Avoiding unstable regions in the design space of EUV mirror systems comprising high-order aspheric surfaces","Marinescu, O.; Bociort, F.; Braat, J.","","2004","When Extreme Ultraviolet mirror systems having several high-order aspheric surfaces are optimized, the configurations often enter into highly unstable regions of the parameter space. Small changes of system parameters lead then to large changes in ray paths, and therefore optimization algorithms crash because certain sssumptions upon which they are based become invalid. We describe a technique that keeps the configuration away from the unstable regions. The central component of our technique is a finite-aberration quantity, the so-called quasi-onvariant, which has been originally introduced by H. A. Buchdahl. The quasi-invariant is computed for several rays in the system, and its average change per surface is determined for all surfaces. Small values of these average changes indicate stability. The stabilization technique consists of two steps: First, we obtain a stable initial configuration for subsequent optimization by choosing the system parameters such that the quasi-invariant change per surface is minimal. Then, if the average changes per surfaces of the quasi-invariant remain small during optimization, the configuration is kept in the safe region of the parameter space. This technique is applicable for arbitrary rotationally symmetric optical systems. Examples from the design of aspheric mirror systems for EUV lithography will be given.","mirror systems; aspheres; EUV lithography; optimization; relaxation","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","",""
"uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","http://resolver.tudelft.nl/uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","Dynamic Optimization in Business-wide Process Control","Tousain, R.L.","Bosgra, O.H. (promotor); Backx, A.C.P.M. (promotor)","2002","The chemical marketplace is a global one with strong competition between man- ufacturers. To continuously meet the customer demands regarding product quality and delivery conditions without the need to maintain very large stor- age levels chemical manufactures need to strive for production on demand. In this thesis we research how market-oriented production can be realized for the particular class of multi-grade continuous processes. For this class of processes production on demand is particularly challenging due to the the complex trade- off between performing costly and time-consuming changeovers and maintaining high storage levels. The first requirement for market-oriented production is that production management cooperates with purchasing and sales management. We propose the use of a scheduler as a decision support system in a cooperative organization constituted by these players. In such a scheduler, decision making is represented using decision variables and their effect on the company-wide objective, which is chosen to be the added value of the company, is modeled. The scheduler then selects a decision strategy that is optimal with respect to the objective and presents this strategy to the decision makers who use it to base their actual decision taking on. The company-market interaction is modeled using a transaction-based mod- eling framework. Therein not the actual market behavior is modeled but the expected effect of the interaction of the company with the market. Two types of transactions can be modeled in this framework: orders, which result from contracts with suppliers and customers, and opportunities, which express the expected sales and purchases. 
Two different approaches to the modeling of production decisions are taken, the choice of which depends largely on the implementation of the process control hierarchy that is assumed. In the first approach, production management and control is performed by a single-level controller and the control decisions are the minute-to-minute manipulation of the valves. This approach is academically interesting, though practically intractable due to the combination of long horizons and fast sampling times. In the second approach the process control hierarchy consists of a scheduling layer, at which it is determined what products will be produced when, and a process control layer, which determines how this production is realized. This approach is taken in the rest of the thesis.","chemical processes; optimization; supply chain","en","doctoral thesis","Delft University Press","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:e7367a12-2b86-4e56-931c-0e3bbcb93211","http://resolver.tudelft.nl/uuid:e7367a12-2b86-4e56-931c-0e3bbcb93211","Water Demand Management. Approaches, Experiences and Application to Egypt","Mohamed, A.S.","Van Beek, E. (promotor); Savenije, H.G. (promotor)","2001","","Egypt; demand management; conservation; reuse; new lands; framework for analysis; strategies; criteria; optimization; financial incentives; water resources management","en","doctoral thesis","Delft University Press","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:0bc0134e-c5e8-4062-956d-979d049352a8","http://resolver.tudelft.nl/uuid:0bc0134e-c5e8-4062-956d-979d049352a8","Dynamic Water-System Control - Design and Operation of Regional Water-Resources Systems","Lobbrecht, A.H.","Segeren, W.A. (promotor); Lootsma, F.A. (promotor)","1997","","water management; water resources; control system; real-time control; dynamic control; optimization; successive linear programming; interests; strategy; design","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:6b34b76a-72e7-4922-9a6a-b2f389b53877","http://resolver.tudelft.nl/uuid:6b34b76a-72e7-4922-9a6a-b2f389b53877","Verkenning genetische algorithmen, een hulpmiddel bij de inrichting van een Rijntak","Goossens, J.G.C.M.; Boogaard, H.F.P. van den","","1996","","Waal; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:d1f186a5-6601-4bfb-a72f-9e007977d6e9","http://resolver.tudelft.nl/uuid:d1f186a5-6601-4bfb-a72f-9e007977d6e9","Interior point techniques in optimization: Complementarity, sensitivity and algorithms","Jansen, B.","Lootsma, F.A. (promotor); Boender, C.G.E. (promotor)","1996","","optimization; sensitivity analysis; interior point algorithms","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:241dcfd9-735b-43dd-a1c9-8d8b4d517f86","http://resolver.tudelft.nl/uuid:241dcfd9-735b-43dd-a1c9-8d8b4d517f86","ADAS structures module: Evaluation report","Arendsen, P.; van Dalen, F.; Bill, C.; Rothwell, A.","","1995","In a joint research project between the National Aerospace Laboratory (NLR) and the Faculty of Aerospace Engineering of Delft University of Technology (TUD) a multi-level system for the preliminary design of aircraft structures has been developed. The present system is essentially an extension of the existing Aircraft Design and Analysis System (ADAS). ADAS has been modified to allow the definition of major structural components like ribs, spars, frames, bulkheads, floor structures, and semi-automatically to generate models for air load calculations (panel methods, doublet lattice) and finite element structural optimization. The models can be automatically modified with changes in general design parameters such as wing area, sweepback angle, aspect ratio. In a study to evaluate the system, ADAS has been used to (re)design the principal structure of the Fokker 50. The report gives an overview of the ADAS system, and describes the work done in the Fokker 50 structural design study. Finally results, conclusions and recommendations are presented.","aircraft design; computer aided design; feasibility analysis; finite element method; Fokker aircraft; optimization; software tools; structural design; structural analysis; three dimensional models","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:afd31d18-2efe-4149-afbe-a8f946c7c2c7","http://resolver.tudelft.nl/uuid:afd31d18-2efe-4149-afbe-a8f946c7c2c7","Optimization of design of IMS racing yachts","van Oossanen, P.","","1995","","optimization; yachts","","other","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","",""
"uuid:717630e4-194c-4d2a-b4d1-d7f3929b5608","http://resolver.tudelft.nl/uuid:717630e4-194c-4d2a-b4d1-d7f3929b5608","User's manual for the computer program CUFUS: Quick design procedure for a CUt-out in a FUSelage version 1.0","Heerschap, M.E.","","1995","","Structural design procedures; cut-outs; pressurized fuselages; finite elements; optimization; sensitivity analysis; NASTRAN; PATRAN","en","report","Delft University of Technology","","","","","","","","Aerospace Engineering","","","","",""
"uuid:e80f3094-dbf5-4df2-b9e5-73e0937e26ec","http://resolver.tudelft.nl/uuid:e80f3094-dbf5-4df2-b9e5-73e0937e26ec","Fuzzy predictive control based on human reasoning","Babuska, R.; Sousa, J.; Verbruggen, H.B.","","1995","","predictive control; fuzzy decision making; optimization; learning","en","conference paper","Delft University of Technology","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:73d07491-d2b7-415a-8713-18f5eecfc25b","http://resolver.tudelft.nl/uuid:73d07491-d2b7-415a-8713-18f5eecfc25b","Similarity transformations between minimal presentations of convex polyhedral cones","ten Dam, A.A.","","1993","A system of linear homogeneous inequalities determines a convex polyhedral cone of feasible solutions. It is investigated under which conditions polyhedral cones can be represented also by a system that contains equalities as well as inequalities. Different representations of the same convex polyhedral cone are related by elementary transformations. It is investigated when representations contain the minimum number of equations necessary to describe a convex polyhedral cone. Moreover, such representations are related by so called similarity transformations. The results presented here can contribute to an easier resolution of problems in optimization theory and control theory.","polyhedrons; cones; convexity; inequalities; similarity theorem; transformations (mathematics); matrices (mathematics); optimization; representations; set theory","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:a65dcff7-5005-4a96-9b25-0789d7ea095a","http://resolver.tudelft.nl/uuid:a65dcff7-5005-4a96-9b25-0789d7ea095a","Lokatiekeuze monsternamestation in de Nieuwe Waterweg: Optimalisatiestudie meetlokatie(s) en methodiek","Bleeker, F.J.; Bons, C.A.","","1993","","waterkwaliteitsmeting; water quality measurement; Nieuwe Waterweg; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:f381200a-8c95-47b7-911e-963241f5d4fc","http://resolver.tudelft.nl/uuid:f381200a-8c95-47b7-911e-963241f5d4fc","Computer aided optimum design of rubble-mound breakwater cross-sections: Manual of the RUMBA computer package, release 1","De Haan, W.","","1989","The computation of the optimum rubble-mound breakwater crosssection is executed on a micro-computer. The RUMBA computer package consists of two main parts: the optimization process is executed by a Turbo Pascal programme, the second part consists of editing functions written in AutoLISP. AutoLISP is the programming language within AutoCAD. The quarry production, divided into a number of categories, and long-term distributions of deep water wave heights and water levels, form the basis of the computation. Concrete armor units have been excluded from the computation. Deep water wave heights are converted to wave heights at site. A set of alternative cross-sections is computed based on both functional performance criteria, and Van der Meer's stability formulae for statically stable structures. Construction costs and maintenance costs are determined of each alternative. The optimum is derived by minimizing the sum of the construction costs and maintenance costs. Moreover, the programme provides means to economize the use of the quarry. At this stage the computer programme is useful for feasibility studies of harbour protection or coastal protection in regions, where use can be made of a quarry in the neighbourhood of the project site and the use of concrete armor units is excluded in advance. Briefly a method is described to extend the computer programme to the use of concrete armor units.","breakwater; armour units; optimization","en","report","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:ee0b82ab-cc7e-499c-ae19-31c104f1a3f9","http://resolver.tudelft.nl/uuid:ee0b82ab-cc7e-499c-ae19-31c104f1a3f9","A perspective of mathematical simulation and optimization techniques in computer aided design","van den Dam, R.F.","","1985","The central issue discussed in this paper is how a designer may profit from the use of mathematical simulation and optimization techniques. These techniques can be useful tools to support the designer in solving his design problem. The place and the potential of these techniques in the design process, as well as their use by the designer, are discussed. The principles underlying these techniques are outlined and an overall view is given of the various methods that can be applied. Examples of applications are presented to illustrate their usefulness in design processes and attention is paid to the integration of these design methods into structured systems for computer aided design. Paper presented at the second International Conference on Computer Applications in Production and Engineering (CAPE '86), Copenhagen, May 20-23, 1986.","mathematical models; optimization; computer aided design; computerized simulation; computerized design; aircraft design; active control; algorithms; computational fluid dynamics","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:9ca1f051-3871-4da9-a7b5-b280e9dbfe47","http://resolver.tudelft.nl/uuid:9ca1f051-3871-4da9-a7b5-b280e9dbfe47","A CAD-system for the design of stiffened panels in wing box structures","Daniels, H.A.M.","","1985","In this paper the CAD system DESTIP is described. DESTIP is developed at the NLR for designing compression panels in wing box structures. Application of the system yields optimal panels in the sense that they meet all the requirements imposed by the designer and have minimum weight per unit width. Weight reductions ranging from 0 to 10% have been achieved compared to designs obtained in a conventional way by experienced designers.","computer aided design; optimization; non-linear programming; finite element methods; wing panels; compressive strength; weight reduction; stiffening; structural design; buckling; computer programs","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:5e16f146-df95-4473-8476-b81e7cd8f781","http://resolver.tudelft.nl/uuid:5e16f146-df95-4473-8476-b81e7cd8f781","Simulation and optimization techniques in computer aided design","van den Dam, R.F.","","1985","The use of numerical simulation and optimization techniques is rapidly expanding in all fields of engineering design. The place and the potential of these techniques in the design process, as well as their use by the designer, are discussed. The principles underlying these techniques are outlined and an overall view is given of the various methods that can be applied. Examples of applications are presented to illustrate their usefulness in design processes. Attention is paid to the integration of these techniques into structured systems for computer aided design, and to the implementation of these systems in the infra-structure of the organisation.","mathematical models; optimization; computer aided design; computerized simulation; mathematical programming; nonlinear optimization; aircraft design; software tools; specifications; systems engineering; structural design; drag reduction; weight reduction","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:619de162-e14f-4e4f-8744-bd461d0b7ea7","http://resolver.tudelft.nl/uuid:619de162-e14f-4e4f-8744-bd461d0b7ea7","Methoden voor het verbeteren van dynamische modellen van constructies","Ottens, H.H.","","1984","A survey of computational methods to improve dynamic analytical models described in the literature is made. Both the matrix methods and the optimal correction method of the dynamic model using the measured modes are discussed.","parameter identification; model response; dynamic models; dynamic structural analysis; stiffness matrix; vibration mode; analysis; finite element methods; optimization; matrix methods","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:f0a8f951-d0d3-4e9c-b730-93a6064b4933","http://resolver.tudelft.nl/uuid:f0a8f951-d0d3-4e9c-b730-93a6064b4933","A survey of computational methods for subsonic and transonic aerodynamic design","Slooff, J.W.","","1984","An overview is provided of computational methods that can be used in solving the design problem of aerodynamics; i.e. the problem of finding the detailed shape of (parts of) configurations of which the gross geometric characteristics have already been determined in a preliminary, overall design process, and that, subject to certain constraints, have to meet given aerodynamic requirements. Attention is focussed on methods for solving the classical inverse problem of aerodynamics and on approaches using optimization techniques. Both methods limited to subsonic flow utilizing panel method technology as well as methods based on finite difference/volume formulations for compressible, transonic flow are covered. In conclusion a discussion is presented of the relative merits of the various computational approaches to the problem of aerodynamic design","computational fluid dynamics; panel methods (fluid mechanics); inverse mode computation; potential flow; aerodynamic configurations; subsonic flow; transonic flow; optimization; pressure distribution; Dirichlet problem; airfoils; boundary value problems; iterative solutions; Neumann problems; algorithms","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:159aa571-3108-471b-a7e7-50a39e4ad7a4","http://resolver.tudelft.nl/uuid:159aa571-3108-471b-a7e7-50a39e4ad7a4","A system for computer aided analysis and design of multi-element airfoils","Labruijere, T.E.","","1983","A program system has been developed as a tool for interactive analysis and design of multi-element airfoils in incompressible viscous flow. A global description of the system is given. It involves the application of three computational methods, one for the analysis of viscous flow and one for the analysis of inviscid flow around multi-element airfoils and one for the design of multi-element airfoils in inviscid flow. The latter two methods are described in some detail. The essential features of the design method are illustrated by means of numerical results.","airfoil profiles; lifting devices; computational fluid dynamics; singularity (mathematics); potential flow; spline functions; panel methods (fluid dynamics); two-dimensional flow; incompressible flow; viscous flow; pressure distribution; computer aided design; computer programs; optimization","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:3a4a1ebc-f64a-4fba-8d46-b62dd47ca290","http://resolver.tudelft.nl/uuid:3a4a1ebc-f64a-4fba-8d46-b62dd47ca290","Illustrative examples of optimization techniques for quantitative and qualitative water management: Report on investigation","Verhaeghe, R.J.; Tholen, N.","","1983","","waterbeheer; water resources management; waterkwaliteit; water quality; optimalisering; optimization","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:b9c9066f-c45a-4f9f-b813-bb9ec0e91b04","http://resolver.tudelft.nl/uuid:b9c9066f-c45a-4f9f-b813-bb9ec0e91b04","SAMID: An interactive System for the Analysis and Constrained Minimization of Induced Drag of aircraft configurations","van den Dam, R.F.","","1982","An interactive computer program system has been developed which provides induced-drag analysis, optimization and configuration-design capabilities. The program system employs subsonic far-field (Trefftzplane) analysis, and novel mathematical formulations of the constrained optimization problems which are based on calculus of variations. The analysis and optimization technique, utilizing panel-method technology with piecewise quadratically varying bound-circulation, is fast, numerically stable and easy to use, and therefore is very suitable for interactive design piorposes in which rapid configuration trade-offs have to be made. The paper presents an outline of the induced-drag analysis and optimization technique, comparisons with other theories and the interactive design capability. Paper presented at the AIAA 21st Aerospace Sciences Meeting, January 10-13, 1983, Reno, Nevada.","subsonic flow; wing span; optimization; calculus of variations; panel method (Fluid mechanics); vortex sheets; far fields; aircraft design; aerodynamic configurations; computerized design; minimum drag; aerodynamic drag; computer programs","en","report","","","","","","","Campus only","","","","","","",""
"uuid:4d4806a8-3c2d-4e3e-abe2-f0a40476ef72","http://resolver.tudelft.nl/uuid:4d4806a8-3c2d-4e3e-abe2-f0a40476ef72","Optimalisatie op basis van lineair programmeren (LP) en dynamisch programmeren (DP): Mogelijkheden en beperkingen","Abraham, G.; Beek, E. van","","1982","","beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; programmering; programming; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:09369434-a255-45f4-a816-baa09f830394","http://resolver.tudelft.nl/uuid:09369434-a255-45f4-a816-baa09f830394","Optimalisatietechnieken in kwantitatief waterbeheer: Ontwerp van beheerstrategieën in PAWN","Samson, J.; Dijkman, J.P.M.","","1981","","beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:7a5d1107-50a0-40ea-815e-c26a3924ff49","http://resolver.tudelft.nl/uuid:7a5d1107-50a0-40ea-815e-c26a3924ff49","The design and aerodynamic characteristics of an 18% thick shock-free airfoil (NLR 7501)","van Egmond, J.A.; Rozendal, D.","","1978","The design and experimental verification of a thick (18 %), shock free airfoil is described. The design was performed, using the NLR hodograph theory for transonic airfoil design. The airfoil was experimentally investigated in the NLR Pilot tunnel.","transonic flow; supercritical wings; airfoil profiles; aerodynamic configurations; scale effect; wind-tunnel tests; pressure distribution; design; flow charts; optimization","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:d42a86c7-b46c-471f-ad18-2e74cc461b74","http://resolver.tudelft.nl/uuid:d42a86c7-b46c-471f-ad18-2e74cc461b74","Optimalisatietechnieken in kwantitatief en kwalitatief waterbeheer","Verhaeghe, R.J.","","1978","","waterbeheer; water resources management; waterkwaliteit; water quality; grondwaterbeheer; groundwater management; watervoorziening; water supply; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:44c16514-bb37-402e-90a3-0df1e1e6ffd0","http://resolver.tudelft.nl/uuid:44c16514-bb37-402e-90a3-0df1e1e6ffd0","Airfoil design by the method of singularities via parametric optimization of a geometrically constrained least squares object function","Labrujere, T.E.","","1976","The present report describes a method for the design of airfoils in incompressible flow which, subject to geometrical constraints generate approximately a given distribution of the pressure coefficient along the projection of the airfoil chord on an axis parallel to the onset flow. The flow is simulated by means of a distribution of singularities along the airfoil contour. The method is based on parametric optimization of a constrained least squares error function. The method results in the determination of the a priori unknown airfoil contour together with the strengths of the singiilarities.","singularity (mathematics); potential flow; incompressible flow; optimization; least squares method; airfoils; weighting functions","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:54af58b2-cbbf-482d-a706-d5c5b27e7c7d","http://resolver.tudelft.nl/uuid:54af58b2-cbbf-482d-a706-d5c5b27e7c7d","An hierarchic multi-suppliers computer network of a research laboratory with two settlements","Loeve, W.","","1975","For the aerospace laboratory NLR in the Netherlands the considerations are discussed that formed the basis of along-term plan to realize an optimal computer network. In the optimization process up to now the laboratory switched from a situation in which in each of both settlements one central close shop computer centre was available to a situation where a multi-suppliers hierarchic computer network exists to serve both settlements. This concerns processing of data from experiments, from a management information and from mathematical models for simulation.","CDC computers; computer-network; optimization; laboratories; real time operations","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:3bfeced0-7f7b-4cda-82a3-be291e9d8ffe","http://resolver.tudelft.nl/uuid:3bfeced0-7f7b-4cda-82a3-be291e9d8ffe","Conception de réseau iBGP","Buob, M.O.; Uhlig, S.; Meulle, M.","","","BGP is used today by all Autonomous Systems (AS) in the Internet. Inside each AS, iBGP sessions distribute the external routes among the routers. In large ASs, relying on a fullmesh of iBGP sessions between routers is not scalable, so route-reflection is commonly used. The scalability of route-reflection compared to an iBGP full-mesh comes at the cost of opacity in the choice of best routes by the routers inside the AS. This opacity induces problems like suboptimal route choices in terms of IGP cost, deflection and forwarding loops. In this work, we propose a solution to design iBGP route-reflection topologies which lead to the same routing as with an iBGP full-mesh and having a minimal number of iBGP sessions. Moreover we compute a robust topology even if a single node or link failure occurs. We apply our methodology on the network of a tier-1 ISP. Twice as many iBGP sessions are required to ensure robustness to single IGP failure. The number of required iBGP sessions in our robust topology is however not much larger than in the current iBGP topology used in the tier-1 ISP network.","BGP; route-reflection; IBGP topology design; optimization","en","conference paper","CFIP","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Network Architectures and Services","","","",""