"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:b8954e95-15d9-430d-b026-71f4cf99ef23","http://resolver.tudelft.nl/uuid:b8954e95-15d9-430d-b026-71f4cf99ef23","On ice mechanics in ice-induced vibrations","Owen, C.C. (TU Delft Offshore Engineering)","Metrikine, A. (promotor); Hendrikse, H. (copromotor); Delft University of Technology (degree granting institution)","2024","The imminence of anthropogenic climate change has motivated a global energy transition towards sustainable power generation. Offshore wind—an important contributor to the energy transition—is expanding, not only in turbine size and number of installations, but also into regions with harsher environmental conditions. One of those conditions in places such as the Baltic Sea is drift ice. Offshore wind turbine support structures, with vertical sides at the waterline, must be designed to survive dynamic ice-structure interaction when ice fails in crushing against the structure. For a safe and efficient design of the support structure, dynamic ice-structure interaction resulting in ice-induced vibrations must be considered. Therefore, both an understanding of the problem and accurate modeling for the prediction of the development of ice-induced vibrations are required.
Significant progress has been made in recent years on the topic of ice-induced vibrations, and a numerical model for prediction of ice-induced vibrations has been developed based on the principles of velocity-dependent deformation and failure behavior of ice, and contact area variation between ice and structure during interaction. However, uncertainty remains regarding physical mechanisms within the ice which govern ice-induced vibrations. The ice mechanics involved in the development of ice-induced vibrations is therefore the main topic of this thesis.
The main objective was to investigate and identify the ice mechanics involved in the development of ice-induced vibrations, especially in the regime of frequency lock-in as historically defined. It was hypothesized that dynamic recrystallization played a relevant role in the ice mechanics involved in ice-induced vibrations. To test the hypothesis, ice mechanics experiments were performed at the ice laboratory specifically developed at Delft University of Technology for this purpose.
To identify grain-scale mechanisms in ice, such as dynamic recrystallization, a method was devised to elucidate ice thin section textures and (quarter) fabrics by means of crossed-polarized transmitted light and interference coloration of ice. An attempt was made to apply the method to the laboratory experiments which applied compressive loading to the edge of a thin freshwater columnar-grained ice plate, laterally confined by glass plates. Crossed-polarized transmitted light was shone through the glass plates to observe the grain structure of the ice during cyclic compression with a haversine velocity waveform. The loading and confinement scenario was intended to reproduce a vertical section of the ice edge during frequency lock-in vibrations. The experimental design demonstrated that the grain-scale mechanics of dynamic recrystallization did not obviously contribute to the peak load-velocity relation associated with frequency lock-in vibrations. As expected, fracture initiated on the grain scale was responsible for load drops. But, more interestingly, stress relaxation during periods of low relative velocity between ice and structure occurred rapidly. Following the stress relaxation, when velocity increased, the peak load was higher than previous brittle peak loads. The results indicated that the mechanisms involved in the stress relaxation were occurring on a scale smaller than the grain size. A loading path dependency was also observed with respect to the peak load-velocity relation.
Ice penetration experiments at the Aalto Ice and Wave Tank in ethanol-doped cold model ice were performed with a rigid structure, controlled oscillation, and a single-degree-of-freedom structure, and comparison of results showed that the peak global ice loads depended on the amount of time spent at low relative velocities where an ice strengthening effect developed. This has implications for the so-called velocity effect and compliance effect in design of structures subject to dynamic ice-structure interaction.
Overall, the load signals from the ice mechanics experiments on freshwater ice resembled the load signals obtained from the controlled-oscillation experiments from the model-scale ice tank tests. The qualitatively similar velocity and resulting load patterns give confidence in the idea that the mechanisms involved in both types of experiments were similar, even for different ice types and loading scenarios.
These similar results demonstrate a link in the ice mechanics across different ice types and loading scenarios, which may be explained with further research on path-dependent constitutive ice behavior, and with scrutiny regarding ice dislocation and grain boundary mechanics. Suggestions for future research are proposed, including the testing of strain rate-varying uniaxial compression of ice and ice penetration experiments with haversine velocity waveforms.","dynamic ice-structure interaction; ice-induced vibrations; frequency lock-in; c-axis; interference coloration; ice microstructure; ice fabric; ice texture; image processing; birefringence; grain boundary; controlled oscillation; ice failure length; anelasticity; ice crushing; model tests; compliance effect; velocity effect","en","doctoral thesis","","978-94-6366-819-4","","","","","","2024-04-08","","","Offshore Engineering","","",""
"uuid:33283954-fd1d-40c9-a6bf-7bd020350bbe","http://resolver.tudelft.nl/uuid:33283954-fd1d-40c9-a6bf-7bd020350bbe","Context-specific value inference via hybrid intelligence","Liscio, E. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); Murukannaiah, P.K. (copromotor); Delft University of Technology (degree granting institution)","2024","Human values are the abstract motivations that drive our opinions and actions. AI agents ought to align their behavior with our value preferences (the relative importance we ascribe to different values) to co-exist with us in our society. However, value preferences differ across individuals and are dependent on context. To reflect diversity in society and to align with contextual value preferences, AI agents must be able to discern the value preferences of the relevant individuals by interacting with them. We refer to this as the value inference challenge, which is the focus of this thesis. Value inference entails several challenges and the related work on value inference is scattered across different AI subfields. We present a comprehensive overview of the value inference challenge by breaking it down into three distinct steps and showing the interconnections among these steps.","Values; Natural Language Processing; Morality; Ethics; Explainable AI; Active Learning; Hybrid Intelligence","en","doctoral thesis","","978-94-6366-840-8","","","","","","","","","Interactive Intelligence","","",""
"uuid:80a31436-92dd-4d85-89c6-e8d2e0f5d666","http://resolver.tudelft.nl/uuid:80a31436-92dd-4d85-89c6-e8d2e0f5d666","Computation-in-Memory for Modern Applications using Emerging Technologies","Shahroodi, T. (TU Delft Computer Engineering)","Wong, J.S.S.M. (promotor); Hamdioui, S. (promotor); Delft University of Technology (degree granting institution)","2024","Modern applications like Genomics and Machine Learning (ML) hold the potential to reshape our understanding of diseases’ genetic origins and guide machines in executing tasks and making predictions without our explicit programming. The successful, widespread integration of these modern applications can usher in advancements in di-agnostics, individualized medicine, and routine tasks such as language interpretation, image analysis, and object categorization. However, our traditional computing infrastructures fall short when accommodating the distinct characteristics of these new applications. Specifically, (1) these applications handle an immense and ever-expanding data working set, and (2) each succeeding version of these applications and their associated use cases necessitates quicker and more energy-efficient analysis of these vast data sets. This is because our traditional computing systems largely hinge on (1) the von-Neumann architecture, a design that distinctly positions processing entities (like CPUs and GPUs) away from storage components (like memories and flash drives), and (2) the CMOS-based technology. While attempting to meet the performance and energy demands of our modern applications, these fully CMOS-based systems based on von-Neumann architecture have increasingly struggled and hit inherent roadblocks, with data movement overhead being the predominant issue.
To alleviate the data movement bottleneck, contemporary research revisits a concept historically known as Computation-In-Memory (CIM) or, alternatively, Processing-In-Memory (PIM). At its core, CIM emphasizes positioning computational capabilities close to, or within, the memory units storing the data. This placement might be within memory chips, in memory controllers, amid caches, or embedded in the logic layers of 3D-stacked memories. As a computational model, architectures leveraging CIM (referred to as CIM architectures) stand to tackle the issue of data movement overhead inherent in the von-Neumann architecture by diminishing or outright eradicating the data movement between computational locales and data storage areas. Moreover, from a technological perspective, emerging memory technologies, including memristive devices and circuits, show potential to replace traditional memory systems, addressing some of the challenges posed by CMOS-based designs.
Irrespective of the specific CIM architecture deployed to optimize performance or energy efficiency in modern applications, there are substantial practical challenges to address and ponder upon first. Both system designers and developers face these hurdles and design decisions, which must be surmounted for CIM’s widespread acceptance across various computational areas and application domains.
In this dissertation, our focus is twofold: (1) We delve into the acceleration and streamlined execution of various steps in two pivotal application realms: genomics and ML; and (2) We explore several emerging memory technologies alongside circuit and architectural strategies that show promise in enhancing CIM designs, specifically tailored for modern applications.
Therefore, in this thesis, we identify and propose strategies and designs to ameliorate the constrained performance of key kernels in genomics and ML. Recognizing that applications within these realms consist of diverse functions or kernels, it is imperative for a designer to possess a thorough understanding of them. Each function/kernel can be characterized by distinct data and control flows, calling for varied features to be enabled in either a von-Neumann or a CIM architecture. To enhance the efficacy of each function/kernel, we first profile them individually and then within a larger context of their corresponding pipeline, followed by discerning the best avenues for their memory mapping in a CIM architecture. We then undertake a concurrent assessment of essential adjunct components alongside the memory array, commonly referred to as the peripheries. For a designer, proficiency in the applications executable on a CIM system leveraging emerging memory technologies is indispensable. Grasping the fundamental characteristics of CIM and having an overarching view of its scope becomes vital prior to its integration. We aim to aggregate critical application features, improvement opportunities, and design decisions and refine them to their core essence. Through this, we aspire to shed light on present design options and identify kernels demanding heightened attention. Such insights can be instrumental in revealing prospective directions, encompassing supported kernels along with their respective merits and trade-offs.
We exploit emerging technologies and architect state-of-the-art CIM designs that optimally serve the targeted kernels, keeping a holistic improvement perspective at the forefront. Delving into emerging (memory) technologies, such as memristive devices like PCM and STT-MRAM, is crucial. These devices provide a suite of advantages, including non-volatility, compactness, and a natural aptitude for conducting logical operations (for instance, the logical AND). Additionally, other emerging technologies, such as integrated photonics, have the potential to enhance the CIM paradigm further with their capacity for high-frequency and low-latency functions. Our ambition is to integrate multiple such technologies, harnessing their distinct attributes, to craft a CIM design that surpasses state-of-the-art counterparts across key benchmarks, be it in execution speed or energy.
This thesis demonstrates that when CIM is fused with emerging (memory) technologies, there is a marked enhancement in the performance of several Genomics pipelines and Machine Learning applications. It is our aspiration and conviction that the evaluations, methodologies, and findings detailed in this dissertation will empower the broader community to comprehend and address contemporary and upcoming challenges that revolve around enhancing the performance and energy efficiency of modern applications through the integration of (re)emerging computing paradigms and technologies. Additionally, our work provides insights for adapting these technologies to novel applications, ensuring they deliver optimal benefits.","Computation-In-Memory; Processing-in-Memory; Bioinformatics; Computer Architecture; Hardware/Software Co-Design; Memristor","en","doctoral thesis","","978-94-6384-534-2","","","","","","","","","Computer Engineering","","",""
"uuid:2c93f1af-bf49-4353-b9b9-c6ed8d62d3c9","http://resolver.tudelft.nl/uuid:2c93f1af-bf49-4353-b9b9-c6ed8d62d3c9","Epidemics on Static and Adaptive Networks","Achterberg, M.A. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Kooij, Robert (promotor); Delft University of Technology (degree granting institution)","2024","The COVID-19 pandemic has had a disruptive impact on healthcare systems and everyday life of the majority of the people around the globe. Despite many years of research on network epidemiology, many key aspects of disease transmission and in particular the response of people to the spread of a disease, remain poorly understood. On the basis of epidemiological modelling lie the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models. In this dissertation, we aim to improve the understanding of the spread of contagious diseases, with an emphasis on the interplay between disease spread and personal behaviour, applied to the SIS and SIR models. The first part starts with the analysis of the eigenvalue spectrum of the infinitesimal generator of the Markovian SIS model with self-infections (Chapter 2). Based on the eigenvalue spectrum, which we believe encodes the majority of the dynamics, we derive an alternative definition of the epidemic threshold. We show that the epidemic threshold approximately coincides with the effective infection rate for which the third-largest eigenvalue is minimal. Contrary to the SIS process, where only an eigenvalue analysis is possible, the SIR process is completely solved on an arbitrary, heterogeneous network (Chapter 3). The benefit of the exact solution is demonstrated by analytically computing the time when the number of infections is maximal. The second part concerns the interplay between the spread of a disease and the response of people to the disease spread. 
We develop the Generalised Adaptive SIS (G-ASIS) model to describe how individuals break and create links in the contact graph. The decisions for breaking or creating links are based on the viral state of the nodes attached to that link. For all 36 instances in the G-ASIS model, we analyse the relation between the epidemic threshold and the effective link-breaking rate (Chapter 4). We derive the first-order and second-order mean-field approximation of the G-ASIS model (Chapter 5) and illustrate that the second-order approximation is able to qualitatively approximate the Markovian model more accurately than the first-order approximation. The G-ASIS mean-field model is extended to arbitrary link-breaking and link-creation responses, which are not only related to the number of susceptible and infectious neighbours of a node, but may also depend on the presence of the virus in the whole population (Chapter 6). For all possible link-breaking and link-creation responses, epidemic waves cannot occur in the mean-field adaptive SIS process. In the final part, we develop the Network-Inference-based Prediction Algorithm (NIPA) for forecasting the spread of contagious diseases on heterogeneous networks (Chapter 7). The contact graph is assumed to be unknown and is inferred by NIPA from the number of reported cases. NIPA is a hybrid method, combining epidemiological knowledge, machine learning, and networks. Network-based forecasting, and NIPA in particular, seems favourable for predicting epidemic outbreaks, which is demonstrated by showing that NIPA outperforms many other forecasting algorithms for estimating the spread of COVID-19.","Mathematical epidemiology; Adaptive networks; Markov processes","en","doctoral thesis","","978-94-6384-514-4","","","","","","","","","Network Architectures and Services","","",""
"uuid:2a5f61d3-dc5a-4d86-ae1a-79a647d37036","http://resolver.tudelft.nl/uuid:2a5f61d3-dc5a-4d86-ae1a-79a647d37036","Doppler Spectrum Parameter Estimation for Weather Radar Echoes Using a Parametric Semianalytical Model","Dash, T.K. (TU Delft Microwave Sensing, Signals & Systems); Driessen, J.N. (TU Delft Microwave Sensing, Signals & Systems); Krasnov, O.A. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2024","The problem of the limited accuracy of precipitation Doppler spectrum moments estimation measured by fast azimuthally scanning weather radars is addressed. A novel approach for the Doppler moment estimation based on maximum likelihood estimation is proposed. A simplified semianalytical parametric model for the precipitation power spectral density (PSD) as a function of the velocity parameters of the scatterers and the finite radar observation time is derived for typical precipitation-like weather conditions. An inverse problem for estimating the Doppler moments from measurements of the PSD is formulated and solved. It is demonstrated that the variance of the estimation of the Doppler moments approaches the Cramer Rao Lower Bound (CRB) when the observation time approaches infinity. The performance of the proposed approach is compared with some classical techniques and another realization of the maximum likelihood approach based on simulated and experimental data. The results indicate the superiority of the proposed approach, especially for short observation time. Furthermore, a scanning strategy to accurately estimate the Doppler moments based on the true velocity dispersion of the scatterers is provided with the help of the proposed approach.","Doppler velocity retrieval; parametric spectrum estimation; radar signal processing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-05-30","","","Microwave Sensing, Signals & Systems","","",""
"uuid:a97d8a06-8661-4755-8548-350a7736ef6b","http://resolver.tudelft.nl/uuid:a97d8a06-8661-4755-8548-350a7736ef6b","Process competences to incorporate in higher education curricula","Nijhuis, S. A. (University of Twente); Endedijk, M. D. (University of Twente); Kessels, W. F. M. (University of Twente); Vrijhoef, R. (TU Delft Design & Construction Management)","","2024","This study reports on a survey on project managers' priorities. The survey used ISO 21500 as a scaffold to ask various respondents, like junior, experienced, and senior project managers, project sponsors, and students, to share their perceptions on the priorities for junior project managers. The respondent groups shared similar perceptions. Furthermore, project type and sector had little effect on junior project managers' priorities. Experienced and senior project managers shared their own priorities as well. The perceptions of priorities for junior, experienced, and senior project managers were mostly alike. However, experienced and senior project managers' priorities seemed slightly more affected by project type and sector. A session with experts in project management and teaching project management highlighted that the results for junior project managers could provide accents for introducing project management to students in higher education, provided the entire playing field of project management is also introduced.","Competences; Experience; Higher education; Processes; Project types; Respondent types","en","journal article","","","","","","","","","","","Design & Construction Management","","",""
"uuid:a379dd7e-0cf9-4f42-98ed-abbed8cd8a67","http://resolver.tudelft.nl/uuid:a379dd7e-0cf9-4f42-98ed-abbed8cd8a67","Enhanced isobutanol recovery from fermentation broth for sustainable biofuels production","Jankovic, T.J. (TU Delft BT/Bioprocess Engineering); Straathof, Adrie J.J. (TU Delft BT/Bioprocess Engineering); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2024","Isobutanol is a highly attractive renewable alternative to conventional fossil fuels, with superior fuel properties as compared to ethanol and 1-butanol. Even though the isobutanol production by fermentation has significant potential, complex downstream processing is limiting the wide-spreading of this technology. Accordingly, this original research significantly contributes to the advancement in industrial biofuel production by developing two eco-efficient downstream processes for the industrial-scale recovery of isobutanol (production capacity 50 ktonneIBUT/y), from a highly dilute fermentation broth (>98 wt% water). Vacuum distillation and a novel hybrid combination of gas stripping and vacuum evaporation were coupled with atmospheric azeotropic distillation to recover over 99.9 % of isobutanol as a high-purity product (100 wt%). Advanced heat pumping and heat integration techniques were further implemented to allow the complete electrification of these recovery processes. Furthermore, implementation of these techniques significantly decreased total annual costs (0.131–0.161 $/kgIBUT), reduced energy requirements (0.488–0.807 kWeh/kgIBUT) and lowered CO2 emissions (0.303–0.449 kgCO2/kgIBUT), resulting in highly competitive purification processes. In addition to efficiently recovering isobutanol, the designed downstream processes provide the potential to enhance the fermentation process by recycling all present microorganisms and reducing water demand. 
Therefore, the results of this original research substantially contribute to the advancement in industrial biotechnology and the widespread adoption of biofuel production.","Biofuels; Dividing-wall column; Downstream processing; Gas stripping with vacuum evaporation; Industrial biotechnology; Isobutanol","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:4d8724f7-f7b6-4a8d-86b5-b52cd5097797","http://resolver.tudelft.nl/uuid:4d8724f7-f7b6-4a8d-86b5-b52cd5097797","Photo-oxidation of Micro-and Nanoplastics: Physical, Chemical, and Biological Effects in Environments","Xu, Yanghui (TU Delft Sanitary Engineering; Chinese Academy of Sciences); Ou, Q. (TU Delft Sanitary Engineering; Chinese Academy of Sciences); van der Hoek, J.P. (TU Delft Sanitary Engineering; Waternet); Liu, G. (TU Delft Sanitary Engineering; Chinese Academy of Sciences); Lompe, K.M. (TU Delft Sanitary Engineering)","","2024","Micro- and nanoplastics (MNPs) are attracting increasing attention due to their persistence and potential ecological risks. This review critically summarizes the effects of photo-oxidation on the physical, chemical, and biological behaviors of MNPs in aquatic and terrestrial environments. The core of this paper explores how photo-oxidation-induced surface property changes in MNPs affect their adsorption toward contaminants, the stability and mobility of MNPs in water and porous media, as well as the transport of pollutants such as organic pollutants (OPs) and heavy metals (HMs). It then reviews the photochemical processes of MNPs with coexisting constituents, highlighting critical factors affecting the photo-oxidation of MNPs, and the contribution of MNPs to the phototransformation of other contaminants. The distinct biological effects and mechanism of aged MNPs are pointed out, in terms of the toxicity to aquatic organisms, biofilm formation, planktonic microbial growth, and soil and sediment microbial community and function. 
Furthermore, the research gaps and perspectives are put forward, regarding the underlying interaction mechanisms of MNPs with coexisting natural constituents and pollutants under photo-oxidation conditions, the combined effects of photo-oxidation and natural constituents on the fate of MNPs, and the microbiological effect of photoaged MNPs, especially the biotransformation of pollutants.","Microplastics; Photo-oxidation; Physical Effects; Photochemical Processes","en","review","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:e316ce2d-d063-4e7b-a686-0c74d3b4905a","http://resolver.tudelft.nl/uuid:e316ce2d-d063-4e7b-a686-0c74d3b4905a","Multi-Sensor Seismic Processing Approach using Geophones and HWC DAS in the Monitoring of CO2 Storage at the Hellisheiði Geothermal Field in Iceland","Bellezza, Cinzia (OGS-National Institute of Oceanography and Applied Geophysics–); Barison, Erika (OGS-National Institute of Oceanography and Applied Geophysics–); Farina, Biancamaria (OGS-National Institute of Oceanography and Applied Geophysics–); Poletto, Flavio (OGS-National Institute of Oceanography and Applied Geophysics–); Meneghini, Fabio (OGS-National Institute of Oceanography and Applied Geophysics–); Böhm, Gualtiero (OGS-National Institute of Oceanography and Applied Geophysics–); Draganov, D.S. (TU Delft Applied Geophysics and Petrophysics); Janssen, M.T.G. (TU Delft Applied Geophysics and Petrophysics); van Otten, Gijs (Seismic Mechatronics BV)","","2024","Geothermal power production may result in significant CO2 emissions as part of the produced steam. CO2 capture, utilisation, subsurface storage (CCUS) and developments to exploit geothermal resources are focal points for future clean and renewable energy strategies. The Synergetic Utilisation of CO2 Storage Coupled with Geothermal Energy Deployment (SUCCEED) project aims to demonstrate the feasibility of using produced CO2 for re-injection in the geothermal field to improve geothermal performance, while also storing the CO2 as an action for climate change mitigation. Our study has the aim to develop innovative reservoir-monitoring technologies via active-source seismic data acquisition using a novel electric seismic vibrator source and permanently installed helically wound cable (HWC) fibre-optic distributed acoustic sensing (DAS) system. 
Implemented together with auxiliary multi-component (3C and 2C) geophone receiver arrays, this approach gave us the opportunity to compare and cross-validate the results using wavefields from different acquisition systems. We present the results of the baseline survey of a time-lapse monitoring project at the Hellisheiði geothermal field in Iceland. We perform tomographic inversion and multichannel seismic processing to investigate both the shallower and the deeper basaltic rocks targets. The wavefield analysis is supported by seismic modelling. The HWC DAS and the geophone-stacked sections show good consistency, highlighting the same reflection zones. The comparison of the new DAS technology with the well-known standard geophone acquisition proves the effectiveness and reliability of using broadside sensitivity HWC DAS in surface monitoring applications.","CO2 injection monitoring; geothermal reservoir; CCUS; surface seismic processing; distributed acoustic sensing (DAS); geophones","en","journal article","","","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:a019c126-b976-4dff-95f1-65f1ec6560ab","http://resolver.tudelft.nl/uuid:a019c126-b976-4dff-95f1-65f1ec6560ab","Guest Editorial: Advances in AI-assisted radar sensing applications","Vishwakarma, Shelly (University of Southampton); Chetty, Kevin (University College London (UCL)); Le Kernec, Julien (University of Glasgow); Chen, Qingchao (Peking University); Adve, Raviraj (University of Toronto); Gurbuz, Sevgi Zubeyde (University of Alabama); Li, Wenda (Heriot-Watt University); Ram, Shobha Sundar (Indraprastha Institute of Information Technology Delhi (IIIT-Delhi)); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2024","","artificial intelligence; convolutional neural nets; radar signal processing; radar target recognition","en","contribution to periodical","","","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:40d7eeec-5eeb-4413-b9b5-85e58ef96512","http://resolver.tudelft.nl/uuid:40d7eeec-5eeb-4413-b9b5-85e58ef96512","Process design and downstream optimization of the direct synthesis route for cleaner production of dimethyl ether from biogas","Fedeli, M. (Politecnico di Milano; Université de Toulouse); Negri, F. (Politecnico di Milano; Itelyum Regeneration Spa, Lodi); Bornazzini, A. (TU Delft ChemE/Product and Process Engineering; Politecnico di Milano); Montastruc, L. (Université de Toulouse); Manenti, F. (Politecnico di Milano); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2024","This study investigates an innovative method to produce dimethyl ether (DME) by direct synthesis from syngas derived from biogas. The proposed process was rigorously simulated in Aspen Plus, highlighting the main sections: (i) biogas tri-reforming, (ii) dimethyl-ether synthesis, and (iii) DME purification. The tri-reforming section has a CO2 and CH4 conversion of 27.3% and 96.2%, respectively A novel catalyst suitable for CO2-rich feed was chosen for the DME production to allow 60% conversion of CO2. Product separation is achieved via several absorption and distillation columns, ensuring that the operating conditions are kept mild to avoid expensive refrigeration. An optimization analysis was performed to identify the most suitable layout of the downstream process. This was identified through the evaluation of performance indicators such as utility usage and operating expenses. A wide range of purification strategies have been evaluated, and two scenarios are proposed based on the results. Configuration A produces 5.34 ktpy DME and 1.26 ktpy methanol, while Configuration B produces exclusively 6.21 ktpy DME. The process configurations were analysed by means of key techno-economic indicators and sustainability metrics. Both processes have an energy intensity of 14.5 kWh/kg. 
The reforming unit has a negligible footprint as it is thermally sustained from biogas combustion, but the reboilers are the main contributors to plant CO2 emissions. Configuration B has the best economic value with 11,634 k€ of NPV after 25 years and a payback time of 4 years.","DME direct synthesis; Green processing; Process optimization; Process simulation; Waste-to-Fuel","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:752ccb09-e90a-4ffe-a9ac-f94905118023","http://resolver.tudelft.nl/uuid:752ccb09-e90a-4ffe-a9ac-f94905118023","Machine learning in process systems engineering: Challenges and opportunities","Daoutidis, Prodromos (University of Minnesota Twin Cities); Lee, Jay H. (University of Southern California); Rangarajan, Srinivas (Lehigh University); Chiang, Leo (The Dow Chemical Company); Gopaluni, Bhushan (University of British Columbia); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering); Harjunkoski, Iiro (Aalto University); Mercangöz, Mehmet (Imperial College London); Mesbah, Ali (University of California)","","2024","This “white paper” is a concise perspective of the potential of machine learning in the process systems engineering (PSE) domain, based on a session during FIPSE 5, held in Crete, Greece, June 27–29, 2022. The session included two invited talks and three short contributed presentations followed by extensive discussions. This paper does not intend to provide a comprehensive review on the subject or a detailed exposition of the discussions; instead its aim is to distill the main points of the discussions and talks, and in doing so, highlight open problems and directions for future research. 
The general conclusion from the session was that machine learning can have a transformational impact on the PSE domain enabling new discoveries and innovations, but research is needed to develop domain-specific techniques for problems in molecular/material design, data analytics, optimization, and control.","Control; Machine learning; Modeling; Molecule discovery; Optimization; Process monitoring","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-05-22","","","ChemE/Product and Process Engineering","","",""
"uuid:882cb88b-5341-45df-ab4f-a8a19fc399f2","http://resolver.tudelft.nl/uuid:882cb88b-5341-45df-ab4f-a8a19fc399f2","Data-driven product-process optimization of N-isopropylacrylamide microgel flow-synthesis","Kaven, Luise F. (Rheinisch-Westfälische Technische Hochschule); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering); Keil, Jan (Rheinisch-Westfälische Technische Hochschule); Israel, Jana (Rheinisch-Westfälische Technische Hochschule); Wolter, Nadja (DWI-Leibniz Institute for Interactive Materials; Rheinisch-Westfälische Technische Hochschule); Mitsos, Alexander (Rheinisch-Westfälische Technische Hochschule)","","2024","Microgels are cross-linked, colloidal polymer networks with great potential for stimuli-response release in drug-delivery applications, as their small size allows them to pass human cell boundaries. For applications with specified requirements regarding size, producing tailored microgels in a continuous flow reactor is advantageous because the microgel properties can be controlled tightly. However, no fully-specified mechanistic models are available for continuous microgel synthesis, as the physical properties of the included components are only studied partly. To address this gap and accelerate tailor-made microgel development, we propose a data-driven optimization in a hardware-in-the-loop approach to efficiently synthesize microgels with defined sizes. We optimize the synthesis regarding conflicting objectives (maximum production efficiency, minimum energy consumption, and the desired microgel radius) by applying Bayesian optimization via the solver “Thompson sampling efficient multi-objective optimization” (TS-EMO). We validate the optimization using the deterministic global solver “McCormick-based Algorithm for mixed-integer Nonlinear Global Optimization” (MAiNGO) and verify three computed Pareto optimal solutions via experiments. 
The proposed framework can be applied to other desired microgel properties and reactor setups and has the potential to accelerate development by minimizing the number of experiments and the modeling effort needed.","Bayesian optimization; Flow-chemistry; Microgel synthesis; Product-process optimization","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-07-01","","","ChemE/Product and Process Engineering","","",""
"uuid:ebe708b5-dd7f-465e-9a26-7aa460a4957c","http://resolver.tudelft.nl/uuid:ebe708b5-dd7f-465e-9a26-7aa460a4957c","Compensating torque ripples in a coarse pointing mechanism for free-space optical communication: A Gaussian process repetitive control approach","Mooren, Noud (Eindhoven University of Technology); van Meer, Max (Eindhoven University of Technology); Witvoet, Gert (Eindhoven University of Technology; TNO); Oomen, T.A.E. (TU Delft Team Jan-Willem van Wingerden; Eindhoven University of Technology)","","2024","Actuators that require commutation algorithms, such as the switched reluctance motor (SRM) considered in this paper and employed in the coarse pointing assembly (CPA) for free-space optical communication, often have torque-ripple disturbances that are periodic in the commutation-angle domain that deteriorate the positioning performance. The aim of this paper is to model the torque ripple as a Gaussian Process (GP) in the commutation-angle domain and consequently compensate for it at arbitrary velocity. The approach employs repetitive control (RC) at a constant velocity. A spatial GP with a periodic kernel is trained using data that is obtained from the RC step resulting in a static non-linear function for compensation at arbitrary velocity. Stability conditions are provided for both steps. The approach is successfully applied to a CPA prototype to improve the tracking performance for laser communication, where the torque ripple is compensated at arbitrary velocity.","Gaussian process; Motion control; Optical pointing; Repetitive control; Switched reluctance motor","en","journal article","","","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:35c1966b-74d4-40c2-b43d-05daf70246b5","http://resolver.tudelft.nl/uuid:35c1966b-74d4-40c2-b43d-05daf70246b5","Estimating geodynamic model parameters from geodetic observations using a particle method","Marsman, C. P. (Universiteit Utrecht); Vossepoel, F.C. (TU Delft Reservoir Engineering); Van Dinther, Y. (Universiteit Utrecht); Govers, R. (Universiteit Utrecht)","","2024","Bayesian-based data assimilation methods integrate observational data into geophysical forward models to obtain the temporal evolution of an improved state vector, including its uncertainties. We explore the potential of a variant, a particle method, to estimate mechanical parameters of the overriding plate during the interseismic period. Here we assimilate vertical surface displacements into an elementary flexural model to estimate the elastic thickness of the overriding plate, and the locations and magnitudes of line loads acting on the overriding plate to produce flexure. Assimilation of synthetic observations sampled from a different forward model than is used in the particle method, reveal that synthetic seafloor data within 150 km from the trench are required to properly constrain parameters for long wavelength solutions of the upper plate (i.e. wavelength ∼500 km). Assimilation of synthetic observations sampled from the same flexural model used in the particle method shows remarkable convergence towards the true parameters with synthetic on-land data only for short to intermediate wavelength solutions (i.e. wavelengths between ∼100 and 300 km). In real-data assimilation experiments we assign representation errors due to discrepancies between our incorrect or incomplete physical model and the data. When assimilating continental data prior to the 2011 Mw Tohoku-Oki earthquake (1997-2000), an unrealistically low effective elastic plate thickness for Tohoku of ∼5-7 km is estimated. 
Our synthetic experiments suggest that improvements to the physical forward model, such as the inclusion of a slab, a megathrust interface and viscoelasticity of the mantle, together with accurate seafloor data and additional geodetic observations, may refine our estimates of the effective elastic plate thickness. Overall, we demonstrate the potential of using the particle method to constrain geodynamic parameters by providing constraints on parameters and corresponding uncertainty values. Using the particle method, we provide insights into the data network sensitivity and identify parameter trade-offs.","Inverse theory; Lithospheric flexure; Probabilistic forecasting; Statistical methods; Subduction zone processes; Time-series analysis","en","journal article","","","","","","","","","","","Reservoir Engineering","","",""
"uuid:5497947b-8b42-45d1-8eff-a63e3b7491ab","http://resolver.tudelft.nl/uuid:5497947b-8b42-45d1-8eff-a63e3b7491ab","Thermally self-sufficient heat pump-assisted azeotropic dividing-wall column for biofuels recovery from isopropanol-butanol-ethanol fermentation","Jankovic, T.J. (TU Delft BT/Bioprocess Engineering); Straathof, Adrie J.J. (TU Delft BT/Bioprocess Engineering); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2024","Isopropanol-butanol-ethanol (IBE) fermentation is a superior biofuel production technology as compared to acetone-butanol-ethanol (ABE) fermentation due to the better fuel properties of the obtained products. However, low product concentrations, thermodynamic constraints and the presence of microorganisms lead to complex downstream processing that limits the competitiveness of this biofuel production method. Thus, this original research proposes a novel thermally self-sufficient and eco-efficient downstream process for industrial-scale recovery after IBE fermentation (74 ktonne/y capacity), from a highly dilute broth (>97 wt% water). Gas stripping and heat pump-assisted vacuum evaporation were implemented to separate valuable products from most of the broth. Furthermore, an advanced highly integrated heat pump-assisted azeotropic dividing-wall column was designed to recover high-purity (99 wt%) butanol biofuel and isopropanol – ethanol fuel supplement (89 wt%). The proposed purification process recovers over 99 % of biofuel products in a cost-effective (0.130 $/kgIBE) and energy-efficient way (0.673 kWeh/kgIBE) while allowing full recycle of biomass and most of the separated water. Besides improving yield by continuously recovering the inhibitory products, fermentation can be further enhanced by avoiding biomass loss and reducing water requirements. Lastly, the implemented energy-saving techniques ensure complete electrification of the proposed IBE recovery process. 
Therefore, the original results of this research study significantly contribute to the development of sustainable biofuel production processes.","Azeotropic dividing-wall column; Downstream processing; Heat pumps; Process electrification; Process intensification","en","journal article","","","","","","Funding Information: All persons who have made substantial contributions to the work reported in the manuscript (e.g. technical help, writing and editing assistance, general support), but who do not meet the criteria for authorship, are named in the Acknowledgements and have given us their written permission to be named. If we have not included an Acknowledgements, then that indicates that we have not received substantial contributions from non-authors. Publisher Copyright: © 2024 The Author(s)","","","","","BT/Bioprocess Engineering","","",""
"uuid:c439c2a9-f5c0-48e8-9d65-e4a02d1c2eba","http://resolver.tudelft.nl/uuid:c439c2a9-f5c0-48e8-9d65-e4a02d1c2eba","Momentum Transport in Organized Shallow Cumulus Convection","Savazzi, A.C.M. (TU Delft Atmospheric Remote Sensing); Nuijens, Louise (TU Delft Atmospheric Remote Sensing); De Rooy, Wim (Royal Netherlands Meteorological Institute (KNMI)); Janssens, M. (TU Delft Atmospheric Remote Sensing; Wageningen University & Research); Siebesma, A.P. (TU Delft Atmospheric Remote Sensing; Royal Netherlands Meteorological Institute (KNMI))","","2024","This study investigates momentum transport in shallow cumulus clouds as simulated with the Dutch Atmospheric Large Eddy Simulation (DALES) for a 150 3 150 km2 domain east of Barbados during 9 days of EUREC4A. DALES is initialized and forced with the mesoscale weather model HARMONIE-AROME and subjectively reproduces observed cloud patterns. This study examines the evolution of momentum transport, which scales contribute to it, and how they modulate the trade winds. Daily-mean momentum flux profiles show downgradient zonal momentum transport in the subcloud layer, which turns countergradient in the cloud layer. The meridional momentum transport is nontrivial, with mostly downgradient transport throughout the trade wind layer except near the top of the surface layer and near cloud tops. Substantial spatial and temporal heterogeneity in momentum flux is observed with much stronger tendencies imposed in areas of organized convection. The study finds that while scales < 2 km dominate momentum flux at 200 m in unorganized fields, submesoscales O(2-20) km carry up to 50% of the zonal momentum flux in the cloud layer in organized fields. For the meridional momentum flux, this fraction is even larger near the surface and in the subcloud layer. The scale dependence of the momentum flux is not explained by changes in convective or boundary layer depth. 
Instead, the results suggest the importance of spatial heterogeneity, increasing horizontal length scales, and countergradient transport in the presence of organized convection.","Convective parameterization; Convective-scale processes; Large eddy simulations; Mesoscale processes; Momentum; Subtropics","en","journal article","","","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:e6472160-3d10-4a1c-8241-759fe215e85e","http://resolver.tudelft.nl/uuid:e6472160-3d10-4a1c-8241-759fe215e85e","Improving plant-level heat pump performance through process modifications","de Raad, B.W. (TU Delft Energie and Industrie); van Lieshout, Marit (Rotterdam University of Applied Sciences); Stougie, L. (TU Delft Energie and Industrie); Ramirez, Andrea (TU Delft ChemE/Chemical Engineering)","","2024","Heat pumps are a promising option to decarbonize the industrial sector. However, their performance at a plant-level can be affected by other process changes. In this work, process changes that improve the heat pump's performance have been identified using Process Change Analysis (PCA), where the background pinch point is used as a reference point for appropriate placement. The effects of the process changes on the heat pump's work requirements are studies by introducing exergy to PCA to form the split exergy grand composite curve. This graph shows the work potential of the streams connected to the heat pump and therefore its work targets. The framework is demonstrated in two case studies. In a biodiesel production plant, it allowed to identify technologies that enhance heat pump performance while reducing overall heating requirements. Here, a heat pump transfers 1.9 MW with a COP of 4.2 but incurs a 40 kW penalty for transferring heat above the background process's pinch temperature. Replacing the wet water washer with a membrane separation unit avoided this penalty, while drastically reducing energy requirements from 0.9 MW to 0.3 MW. 
In a vinyl chloride monomer-purification process, PCA showed how the extraction of heat by the heat pump impacted the formation of the background pinch, from which an implementation strategy was derived that increased the heat pump's plant-level performance by 6.5% with respect to standard implementation.","Exergy grand composite curve; Heat pumps; Pinch analysis; Process change analysis","en","journal article","","","","","","","","","","ChemE/Chemical Engineering","Energie and Industrie","","",""
"uuid:7990a9cb-3423-42dc-9dd3-139c4ac259f2","http://resolver.tudelft.nl/uuid:7990a9cb-3423-42dc-9dd3-139c4ac259f2","Time-dependent earthquake-fire coupling fragility analysis under limited prior knowledge: A perspective from type-2 fuzzy probability","Men, Jinkun (South China University of Technology; Guangdong Provincial Science and Technology Collaborative Innovation Center for Work Safety; Katholieke Universiteit Leuven); Chen, Guohua (South China University of Technology; Guangdong Provincial Science and Technology Collaborative Innovation Center for Work Safety); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Katholieke Universiteit Leuven; Universiteit Antwerpen)","","2024","Earthquake-triggered fire domino scenarios (E-FDSs) arise frequently from the interaction between earthquakes and chemical installations, resulting in catastrophic multi-hazard coupling events. The complicated mutually amplified phenomena between natural disasters and chemical accidents significantly aggravates the escalation of domino accidents, which has posed great challenges for modeling and preventing E-FDSs. Under this impetus, this work proposes an advanced type-2 fuzzy probabilistic methodology to obtain the time-dependent failure probability of steel cylindrical tanks (SCTs) subjected to the earthquake-fire sequence. To cope with the limited prior knowledge on E-FDSs, a basic universal is established to describe the fire resistance attenuation caused by the seismic damage. The coupling failure criterion of SCTs is formulated by a type-2 fuzzy time-dependent limit state equation. A credibility-based stochastic simulation algorithm is developed for the hybrid uncertainty analysis (combining ambiguity and stochasticity). The proposed methodology is validated by case studies of a 5000 m3 fixed roof tank. 
Compared to the existing accident probability model, the proposed methodology can not only capture the fire resistance attenuation caused by the seismic damage but also provide a dynamic estimation of tank failure probability with respect to the fire exposure time. The proposed methodology can effectively and dynamically capture the accident evolution process, which in turn helps mitigate and prevent the spatiotemporal propagation of domino effects.","Chemical Industrial Parks; Chemical Process Safety; Earthquake-triggered Fire Domino Scenarios; Multi-hazard Coupling Events; Steel Cylindrical Tank; Type-2 Fuzzy Possibility Theory","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-07-06","","","Safety and Security Science","","",""
"uuid:d92ada99-8261-46c1-ac6f-de2e3feb3520","http://resolver.tudelft.nl/uuid:d92ada99-8261-46c1-ac6f-de2e3feb3520","Using sky-classification to improve the short-term prediction of irradiance with sky images and convolutional neural networks","Martinez Lopez, V.A. (TU Delft Photovoltaic Materials and Devices; TU Delft Electrical Engineering, Mathematics and Computer Science); van Urk, G.A. (TU Delft Photovoltaic Materials and Devices; TU Delft Electrical Engineering, Mathematics and Computer Science); Doodkorte, P.J.F. (TU Delft Photovoltaic Materials and Devices; TU Delft Electrical Engineering, Mathematics and Computer Science); Zeman, M. (TU Delft Photovoltaic Materials and Devices); Isabella, O. (TU Delft Photovoltaic Materials and Devices); Ziar, H. (TU Delft Photovoltaic Materials and Devices)","","2024","Clouds moving in front or away from the sun are the leading cause of irradiance variability. These variations have a repercussion on the electricity production of photovoltaic systems. Predicting such changes is essential for proper control of these systems and for maintaining grid stability. Images from the sky have proven to help with short-term solar irradiance forecasting, especially when combined with artificial intelligence. Nevertheless, these models tend to smooth the irradiance fluctuations. We propose a forecasting model to predict the clear-sky index in a forecast horizon of 20 min with a 1-minute resolution. Our model, based on a classifier to determine the sky conditions and, on an optical flow, applies an artificial intelligence model explicitly trained on each class of sky conditions. This strategy has an equivalent performance to an unclassified model and a forecast skill between 5 and 20% with respect to the smart persistence model for most classes of sky conditions while requiring considerably less training data. 
Although our model reduces the overall prediction error, it still has difficulties predicting irradiance changes, particularly on overcast days. Our classifying strategy can be applied to other models targeting different objectives to predict sudden changes in either irradiance or power related to photovoltaic systems.","All-sky images; Deep learning; Irradiance nowcasting; Sky-image processing","en","journal article","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","Photovoltaic Materials and Devices","","",""
"uuid:39016597-6bf1-4636-89c2-96cf9cf1707e","http://resolver.tudelft.nl/uuid:39016597-6bf1-4636-89c2-96cf9cf1707e","An integrated approach to quantitative resilience assessment in process systems","Sun, H. (TU Delft Safety and Security Science; Anhui University of Technology; China University of Petroleum (East China)); Yang, M. (TU Delft Safety and Security Science); Wang, Haiqing (China University of Petroleum (East China))","","2024","Chemical process systems are becoming more automated and complex, which leads to increased interaction and interdependence between the human and technical elements of process systems. This urges the need for updating the safety assessment method by treating “safety” as an emergent property of a system. Uncertainty comes together with complexity. To enhance system ability of dealing with uncertain disruptions, this paper proposes a quantitative resilience assessment method by modeling the failure propagation (initiated by a disruption) across the functional units of a system. The Functional Resonance Analysis Method (FRAM) is utilized to model the system operation to represent the relationship among its function units and to consider the interactions among human-technical factors. Then, a Cascading Failure Propagation Model (CFPM) is developed to quantify the fault propagation process and reflect the system functionality changes over time for resilience assessment. The proposed method is applied to a propane-feeding control system. 
The results show that it can help practitioners understand the process of fault propagation and risk increase, identify potential ways to design a more resilient system to respond to uncertain disruptions/attacks, and provide a real-time dynamic resilience profile to support decision-making.","FRAM; Human-technical systems; Resilience; Process safety","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-06-13","","","Safety and Security Science","","",""
"uuid:c4a44233-af1b-479d-830f-d0bef8f10fb9","http://resolver.tudelft.nl/uuid:c4a44233-af1b-479d-830f-d0bef8f10fb9","Role of the composition of humic substances formed during thermal hydrolysis process on struvite precipitation in reject water from anaerobic digestion","Pavez Jara, J.A. (TU Delft Sanitary Engineering); Iswarani, W.P. (TU Delft Water Resources; TU Delft Support Water Management; Wetsus, Centre for Sustainable Water Technology); van Lier, J.B. (TU Delft Sanitary Engineering); de Kreuk, M.K. (TU Delft Water Management)","","2024","Thermal hydrolysis process (THP) is a widely used pre-treatment method in the anaerobic digestion (AD) of waste municipal sewage sludge. A post AD dewatering step of the digestate produces a liquid stream called reject water. THP increases the concentration of humic substances (HSs) and nutrients in the produced reject water. Struvite precipitation is a widely used technique to remove and (potentially) recover PO43− -P and the corresponding amount of total ammoniacal nitrogen from reject water. The chemical characteristics of the THP-produced HSs influence reaction yields and morphology of struvite. In our current study, struvite batch precipitation experiments were conducted at different pHs, and different concentrations of HSs, consisting of either melanoidins or humic acids. Our results showed that at pH 6.5 struvite precipitation was severely retarded. However, increased concentrations of melanoidins at pH 6.5 enhanced struvite precipitation. Batch experiments conducted at pH 7.25 and 8 with increased melanoidins concentrations showed PO43−-P precipitation yields over 86 %. Humic acids negatively impacted struvite precipitation at all analysed pH values, presumably because of Mg2+ complexation. Morphological analysis showed that the presence of both HSs affected Feret diameters, aspect ratio, and cleavage pattern of struvite. Also, HSs rendered coloured crystals. 
Overall, our results showed that the intrinsic characteristics of HSs affect struvite precipitation, influencing the yield, morphology, and colour of the formed precipitates.","Humic acid; Melanoidins; Phosphate recovery; Struvite; Thermal hydrolysis process","en","journal article","","","","","","","","","","Water Management","Sanitary Engineering","","",""
"uuid:a33618f1-35f0-4a33-8a2c-0e2657f5b40c","http://resolver.tudelft.nl/uuid:a33618f1-35f0-4a33-8a2c-0e2657f5b40c","A new Bayesian approach for managing bathing water quality at river bathing locations vulnerable to short-term pollution","Seis, W.A.A. (TU Delft Sanitary Engineering; Kompetenzzentrum Wasser Berlin); ten Veldhuis, Marie-claire (TU Delft Water Resources); Rouault, Pascale (Kompetenzzentrum Wasser Berlin); Steffelbauer, D.B. (Kompetenzzentrum Wasser Berlin); Medema, G.J. (TU Delft Sanitary Engineering; KWR Water Research Institute)","","2024","Short-term fecal pollution events are a major challenge for managing microbial safety at recreational waters. Long turn-over times of current laboratory methods for analyzing fecal indicator bacteria (FIB) delay water quality assessments. Data-driven models have been shown to be valuable approaches to enable fast water quality assessments. However, a major barrier towards the wider use of such models is the prevalent data scarcity at existing bathing waters, which questions the representativeness and thus usefulness of such datasets for model training. The present study explores the ability of five data-driven modelling approaches to predict short-term fecal pollution episodes at recreational bathing locations under data scarce situations and imbalanced datasets. The study explicitly focuses on the potential benefits of adopting an innovative modeling and risk-based assessment approach, based on state/cluster-based Bayesian updating of FIB distributions in relation to different hydrological states. The models are benchmarked against commonly applied supervised learning approaches, particularly linear regression, and random forests, as well as to a zero-model which closely resembles the current way of classifying bathing water quality in the European Union. For model-based clustering we apply a non-parametric Bayesian approach based on a Dirichlet Process Mixture Model. 
The study tests and demonstrates the proposed approaches at three river bathing locations in Germany, known to be influenced by short-term pollution events. At each river two modelling experiments (“longest dry period”, “sequential model training”) are performed to explore how the different modelling approaches react and adapt to scarce and uninformative training data, i.e., datasets that do not include event pollution information in terms of elevated FIB concentrations. We demonstrate that it is especially the proposed Bayesian approaches that are able to raise correct warnings in such situations (> 90 % true positive rate). The zero-model and random forest are shown to be unable to predict contamination episodes if pollution episodes are not present in the training data. Our research shows that the investigated Bayesian approaches reduce the risk of missed pollution events, thereby improving bathing water safety management. Additionally, the approaches provide a transparent solution for setting minimum data quality requirements under various conditions. The proposed approaches open the way for developing data-driven models for bathing water quality prediction against the reality that data scarcity is a common problem at existing and prospective bathing waters.","Dirichlet Process Mixture Model; Probabilistic modelling; Recreational waters","en","journal article","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:08a5c185-f949-4acc-843b-3e7af457d8a0","http://resolver.tudelft.nl/uuid:08a5c185-f949-4acc-843b-3e7af457d8a0","Uncertainty quantification of the wall thickness and stiffness in an idealized dissected aorta","Gheysen, Lise (Universiteit Gent); Maes, Lauranne (Katholieke Universiteit Leuven); Caenen, Annette (Universiteit Gent; Katholieke Universiteit Leuven); Segers, Patrick (Universiteit Gent); Peirlinck, M. (TU Delft Medical Instruments & Bio-Inspired Technology); Famaey, Nele (Katholieke Universiteit Leuven)","","2024","Personalized treatment informed by computational models has the potential to markedly improve the outcome for patients with a type B aortic dissection. However, existing computational models of dissected walls significantly simplify the characteristic false lumen, tears and/or material behavior. Moreover, the patient-specific wall thickness and stiffness cannot be accurately captured non-invasively in clinical practice, which inevitably leads to assumptions in these wall models. It is important to evaluate the impact of the corresponding uncertainty on the predicted wall deformations and stress, which are both key outcome indicators for treatment optimization. Therefore, a physiology-inspired finite element framework was proposed to model the wall deformation and stress of a type B aortic dissection at diastolic and systolic pressure. Based on this framework, 300 finite element analyses, sampled with a Latin hypercube, were performed to assess the global uncertainty, introduced by 4 uncertain wall thickness and stiffness input parameters, on 4 displacement and stress output parameters. The specific impact of each input parameter was estimated using Gaussian process regression, as surrogate model of the finite element framework, and a δ moment-independent analysis. The global uncertainty analysis indicated minor differences between the uncertainty at diastolic and systolic pressure. 
For all output parameters, the 4th quartile contained the major fraction of the uncertainty. The parameter-specific uncertainty analysis elucidated that the material stiffness and relative thickness of the dissected membrane were the respective main determinants of the wall deformation and stress. The uncertainty analysis provides insight into the effect of uncertain wall thickness and stiffness parameters on the predicted deformation and stress. Moreover, it emphasizes the need for probabilistic rather than deterministic predictions for clinical decision making in aortic dissections.","Aortic dissection; Finite element analysis; Gaussian process regression; Uncertainty quantification; Vascular mechanics","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-07-03","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:01b7743d-7174-434c-a47e-ee2e705875a7","http://resolver.tudelft.nl/uuid:01b7743d-7174-434c-a47e-ee2e705875a7","Bioethanol separation by a new pass-through distillation process","Jankovic, T.J. (TU Delft BT/Bioprocess Engineering); Straathof, Adrie J.J. (TU Delft BT/Bioprocess Engineering); McGregor, Ian R. (Drystill Holdings, Mississauga); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2024","Distillation is the most used separation technology at industrial-scale, but using distillation in bio-based processes (e.g. fermentation processes to produce bioethanol) is quite challenging when mild temperatures are needed to keep the microbes alive. Vacuum distillation can be used to perform evaporation at low temperatures, but setting a low distillation pressure fixes also the condensation temperature to very low values that may require expensive refrigeration. Pass-through distillation (PTD) is an emerging hybrid separation technology that effectively combines distillation with absorption in a sorption-assisted distillation process that decouples the evaporation and condensation steps. This is achieved by inserting between the evaporation and condensation steps an absorption-desorption loop that passes through the component to be separated and allows the use of different pressures and types of heating and cooling utilities. This paper is the first to present the process design and rigorous simulation (implemented in Aspen Plus) of a new pass-through distillation process for bioethanol (∼100 ktonne/y plant capacity), proving its effectiveness in concurrent alcohol recovery and fermentation (CARAF). Combining PTD with heat pumps leads to low recovery costs of 0.122 $/kgEtOH and energy requirements of only 1.723 kWthh/kgEtOH. 
Alternatively, combining PTD with multi-effect distillation resulted in 0.131 $/kgEtOH recovery costs and 1.834 kWthh/kgEtOH energy intensity.","Bioethanol; Distillation; Fluid separation; Industrial fermentation; Process design","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:a568ba90-4266-4e09-9f5d-5be3b52116b3","http://resolver.tudelft.nl/uuid:a568ba90-4266-4e09-9f5d-5be3b52116b3","Thermally self-sufficient process for single-step coproduction of methanol and dimethyl ether by CO2 hydrogenation","Vaquerizo, L. (TU Delft ChemE/Product and Process Engineering; University of Valladolid); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2024","Methanol and DME are highly efficient fuels and relevant building blocks that can be synthesized by CO2 hydrogenation. While several alternatives for methanol production by CO2 hydrogenation have already been developed at a commercial scale, DME production is still based on methanol dehydration. In this sense, the development of bifunctional methanol synthesis/dehydration catalysts is a clear opportunity for the simultaneous coproduction of methanol and DME in a single-step process. Although a few alternatives for DME-methanol coproduction have been proposed, either they need external fuels or refrigerants, or part of the CO2 used as raw material is purged, resulting in a loss of methanol and DME yields. This work presents a novel thermally self-sufficient process that hydrogenates CO2 into methanol and DME in a single reactor at 100 % yield (only water as a byproduct at 0.94 kgwater/kgproduct), that only consumes air, cooling water (0.006 m3 water/kgproducts) and electricity (net CO2 emissions of −1.20 or 0.64 kgCO2eq/kgproducts when the plant is operated with green or grey electricity, respectively). The innovative design, based on the combination of a top-divided wall column, an integrated heat network, and limited pressure drop in the reaction-separation loop, results in a thermally self-sufficient process that uses only 0.76 kWh per kg products.","Dividing-wall column; Dual catalyst; Energy efficiency; Process design; Process integration","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:bfd0d299-b573-4a1d-aab8-a085d9284d87","http://resolver.tudelft.nl/uuid:bfd0d299-b573-4a1d-aab8-a085d9284d87","Blind Polarization Demultiplexing of Shaped QAM Signals Assisted by Temporal Correlations","Bajaj, V. (TU Delft Team Sander Wahls); Van de Plas, Raf (TU Delft Team Raf Van de Plas; VanderBilt University); Wahls, S. (Karlsruhe Institut für Technologie)","","2024","While probabilistic constellation shaping (PCS) enables rate and reach adaption with finer granularity [1] (Cho and Winzer, 2009), it imposes signal processing challenges at the receiver. Since the distribution of PCS-quadrature amplitude modulation (QAM) signals tends to be Gaussian, conventional blind polarization demultiplexing algorithms are not suitable for them [2] (Johnson et al., 1998). It is known that independently and identically distributed (iid) Gaussian signals, when mixed, cannot be recovered/separated from their mixture. For PCS-QAM signals, there are algorithms such as [3] and [4] Dris et al. (2019) and Athuraliya et al. (2004) which are designed by extending conventional blind algorithms used for uniform QAM signals. In these algorithms, an initialization point is obtained by processing only a part of the mixed signal, which have non-Gaussian statistics. In this article, we propose an alternative method wherein we add temporal correlations at the transmitter, which are subsequently exploited at the receiver in order to separate the polarizations. We will refer to the proposed method as frequency domain (FD) joint diagonalization (JD) probability aware-multi modulus algorithm (pr-MMA), and it is suited to channels with moderate polarization mode dispersion (PMD) effects. Furthermore, we extend our previously proposed JD-MMA [5] (Bajaj et al., 2022) by replacing the standard MMA with a pr-MMA, improving its performance. 
Both FDJD-pr-MMA and JD-pr-MMA are evaluated for a diverse range of PCS (entropy $\mathcal {H}$) of 64-QAM over a first-order PMD channel that is simulated in a proof-of-concept setup. A MMA initialized with a memoryless constant modulus algorithm (CMA) is used as a benchmark. We show that at a differential group delay (DGD) of 10% of symbol period T$_{\text{symb}}$ and 18 dB SNR/pol., JD-pr-MMA successfully demultiplexes the PCS signals, while CMA-MMA fails drastically. Furthermore, we demonstrate that the newly proposed FDJD-pr-MMA is robust against moderate PMD effects by evaluating it over a DGD of up to 40% of T$_{\text{symb}}$. Our results show that the proposed FDJD-pr-MMA successfully equalizes PMD channels with a DGD up to 20% of T$_{\text{symb}}$.","and optical fiber communication; Correlation; Demultiplexing; digital signal processing; Optical fiber dispersion; Polarization demultiplexing; probabilistic constellation shaping; Programmable logic arrays; Quadrature amplitude modulation; Signal processing algorithms; Symbols","en","journal article","","","","","","","","2024-03-14","","","Team Sander Wahls","","",""
"uuid:a7582c0e-bbf1-4450-af87-522f72b40123","http://resolver.tudelft.nl/uuid:a7582c0e-bbf1-4450-af87-522f72b40123","Digital twin in high throughput chromatographic process development for monoclonal antibodies","Picanço Castanheira Da Silva, T. (TU Delft BT/Bioprocess Engineering); Eppink, M.H.M. (Wageningen University & Research); Ottens, M. (TU Delft BT/Design and Engineering Education)","","2024","The monoclonal antibody (mAb) industry is becoming increasingly digitalized. Digital twins are becoming increasingly important to test or validate processes before manufacturing. High-Throughput Process Development (HTPD) has been progressively used as a tool for process development and innovation. The combination of High-Throughput Screening with fast computational methods allows to study processes in-silico in a fast and efficient manner. This paper presents a hybrid approach for HTPD where equal importance is given to experimental, computational and decision-making stages. Equilibrium adsorption isotherms of 13 protein A and 16 Cation-Exchange resins were determined with pure mAb. The influence of other components in the clarified cell culture supernatant (harvest) has been under-investigated. This work contributes with a methodology for the study of equilibrium adsorption of mAb in harvest to different protein A resins and compares the adsorption behavior with the pure sample experiments. Column chromatography was modelled using a Lumped Kinetic Model, with an overall mass transfer coefficient parameter (kov). The screening results showed that the harvest solution had virtually no influence on the adsorption behavior of mAb to the different protein A resins tested. kov was found to have a linear correlation with the sample feed concentration, which is in line with mass transfer theory. 
The hybrid approach for HTPD presented highlights the roles of the computational, experimental, and decision-making stages in process development, and how it can be implemented to develop a chromatographic process. The proposed white-box digital twin helps to accelerate chromatographic process development.","Harvest High-throughput screening; High-throughput process development; Lumped kinetic model; Overall mass transfer coefficient","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:901f5688-0010-4467-930f-69f6596e45b4","http://resolver.tudelft.nl/uuid:901f5688-0010-4467-930f-69f6596e45b4","Capturing experts’ knowledge in heritage planning enhanced by AI: A case study of windcatchers in Yazd, Iran","Foroughi, M. (TU Delft Heritage & Architecture); de Andrade, Bruno (TU Delft Heritage & Architecture); Pereira Roders, A. (TU Delft Heritage & Architecture)","","2024","Experts have always played an important role in heritage planning, practice, and theory. There is a wealth of literature published every year regarding heritage and its cultural significance. Experts also contribute to heritage planning and developing policy documents. Still, literature is rarely used as a source of primary research to systematically reveal and compare experts’ opinions on the cultural significance of built heritage. Analyzing them as a whole is costly and time-consuming, especially on built heritage, when much has been written about. While the automation of methods has proven to mitigate such restrictions in other fields, as digital humanities, their application in heritage planning, practice, and theory is still scarce. Hence, this paper aims to investigate the potentials of AI models (e.g., multi label text classification) in analyzing scientific documents, revealing the cultural significance of built heritage, values and attributes. This was done to better understand the similarities and differences between the experts’ opinions. Yazd, Iran, is taken a case study, with a particular focus on windcatchers, a key attribute conveying cultural significance, of outstanding universal value, due to its inscription on the UNESCO World Heritage List. 
This paper has three subsequent phases: 1) state of the art on the application of AI in heritage planning; 2) methodology of data collection and data analysis related to coding values and attributes of windcatchers, addressed in relevant documents; 3) preliminary findings on the experts’ opinions over values and attributes of windcatchers. Results contribute to the scientific discussion, revealing the cultural significance of windcatchers of Yazd from experts’ point of view. Besides, the potential of AI for heritage planning is revealed in terms of (de)coding and measuring the cultural significance of built heritage from the available documents, showing the opinions of experts with various backgrounds. This model can be applied to other key attributes in Yazd and other case studies and scales to support heritage planning, practice, and theory.","Attribute; Cultural significance; Expert; Natural language processing; Text classification; Value","en","journal article","","","","","","","","","","","Heritage & Architecture","","",""
"uuid:14fe9f4a-1c21-452e-95ab-51d164e05619","http://resolver.tudelft.nl/uuid:14fe9f4a-1c21-452e-95ab-51d164e05619","Diversifying Knowledge Production in HCI: Exploring Materiality and Novel Formats for Scholarly Expression","Sturdee, Miriam (University of St Andrews); Genç, H.U. (TU Delft Human-Centred Artificial Intelligence); Wanick, Vanissa (University of Southampton)","","2024","This one-day studio aims to catalyze discussions and experimentation around non-textual academic documentation methods. With the understanding that human knowledge transcends written words, we aim to explore innovative ways to present and disseminate research outputs in diverse forms and of varying materiality. By bringing together researchers, practitioners, and academics from different disciplines and backgrounds, we seek to challenge the status quo of textual output and envision a future where knowledge production embraces the multisensory nature of human data.","futuring; knowledge production; pictorials; process; research output; tangible","en","conference paper","Association for Computing Machinery (ACM)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-08-11","","","Human-Centred Artificial Intelligence","","",""
"uuid:7283ac18-a8fa-4a90-966b-30999a3ee918","http://resolver.tudelft.nl/uuid:7283ac18-a8fa-4a90-966b-30999a3ee918","Risk assessment methods for process safety, process security and resilience in the chemical process industry: A thorough literature review","Bin Ab Rahim, M.S. (TU Delft Safety and Security Science; Ministry of Human Resources); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven); Yang, M. (TU Delft Safety and Security Science; Universiti Teknologi Malaysia; University of Tasmania); Bajpai, Shailendra (Dr B.R. Ambedkar National Institute of Technology)","","2024","This paper presents a systematic literature review of risk assessment methods in the chemical process industry (CPI), focusing on process safety, process security, and resilience. We analyzed peer-reviewed articles from 2000 to 2022 using the PRISMA methodology and identified twelve predominant methods. Our findings reveal a shift towards dynamic, systemic-based assessments like the Functional Resonance Analysis Method (FRAM) and System-Theoretic Accident Model and Processes (STAMP). These methods are particularly effective at capturing the complexities of sociotechnical systems in the CPI. However, a significant observation from our review is the limited emphasis on the resilience paradigm within many existing methods when addressing both process safety and process security risks, which is crucial for preventing and recovering from disruptions. Given the evolving challenges in system safety and security threats, there is an urgent need for holistic methods that integrate process safety, process security, and resilience. 
Our review highlights the opportunity for further research to better prepare the industry for future challenges, ensuring safer, more secure, reliable, and resilient operations.","Chemical process industry; Process safety; Process security; Resilience; Risk assessment; Sociotechnical systems","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:19b80c41-ef6f-42c0-9a81-2c600c9dd53f","http://resolver.tudelft.nl/uuid:19b80c41-ef6f-42c0-9a81-2c600c9dd53f","Large deviations for Markov processes with switching and homogenisation via Hamilton–Jacobi–Bellman equations","Della Corte, S. (TU Delft Applied Probability); Kraaij, R.C. (TU Delft Applied Probability)","","2024","We consider the context of molecular motors modelled by a diffusion process driven by the gradient of a weakly periodic potential that depends on an internal degree of freedom. The switch of the internal state, that can freely be interpreted as a molecular switch, is modelled as a Markov jump process that depends on the location of the motor. Rescaling space and time, the limit of the trajectory of the diffusion process homogenises over the periodic potential as well as over the internal degree of freedom. Around the homogenised limit, we prove the large deviation principle of trajectories with a method developed by Feng and Kurtz based on the analysis of an associated Hamilton–Jacobi–Bellman equation with an Hamiltonian that here, as an innovative fact, depends on both position and momenta.","Large deviations; Switching Markov process; Hamilton–Jacobi equation; Viscosity solutions; Comparison principle","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:c1639160-332a-4289-9f2d-2e6014a8ed57","http://resolver.tudelft.nl/uuid:c1639160-332a-4289-9f2d-2e6014a8ed57","Organizational learning from construction fatalities: Balancing juridical, ethical, and operational processes","van Marrewijk, A.H. (TU Delft Design & Construction Management; BI Norwegian Business School; Vrije Universiteit Amsterdam); van der Steen, Hans (Gebr. van der Steen)","","2024","Construction work is associated with high risks of fatalities. Effective, deep and lasting learning from incidents is important for the safety of employees, but not well developed in the construction sector. We studied the organizational processes after a fatality through an auto-ethnographic field work study and found three distinct, but interrelated processes to normalize construction work; juridical, ethical and operational processes. Balanced attention to all three processes supports an effective, deep and lasting learning from incidents. We contribute to the learning from incidents literature with the insight that balanced attention for all three processes helps to learn from incidents and to improve the safety of workers. Furthermore, second victims can be important for the learning of incidents process. Finally, the findings throw new light on inadequate supervision of safety procedures, as the temporary characteristics of projects forces workers to deviate from safety procedures.","Construction fatality; Learning from incidents; Organizational processes; Safety; Second victims","en","journal article","","","","","","","","","","","Design & Construction Management","","",""
"uuid:afdc3565-30a8-4a0a-8bd6-5e67627b05d4","http://resolver.tudelft.nl/uuid:afdc3565-30a8-4a0a-8bd6-5e67627b05d4","Multipath Exploitation for Human Activity Recognition using a Radar Network","Guendel, R.G. (TU Delft Microwave Sensing, Signals & Systems); Kruse, N.C. (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2024","In this study, the problem of multipath in radar sensor networks for human activity recognition (HAR) has been examined. Traditionally considered as a source of additional clutter, the multipath is being investigated for its potential to be exploited through the creation of virtual radar nodes. These virtual nodes are conceptualized to observe targets from aspect angles that differ from those of physically existing radars. To realize this idea, an innovative processing pipeline is proposed that extracts information from multipath signals to improve HAR. The pipeline isolates and tracks the line-of-sight (LOS) and multipath components of a moving human target performing continuous sequences of activities observed by a network of three radar sensors. Furthermore, the method has been verified with experimental data consisting of six activities and 14 volunteers by comparing classification metrics with the use of a single radar as well as only the LOS components of the three radars in the network. A 12-layer convolutional neural network (CNN) classifier has been designed to operate on range-Doppler (RD) images derived from the LOS and multipath components, extracted by the proposed method. 
A substantial performance improvement using the leave-one-person-out (L1Po) test set is demonstrated in the order of +11% by exploiting a multiradar network with its LOS and multipath components.","radar signal processing; radar multipath; multipath; human activity recognition; distributed radar; hierarchical clustering; clustering; multilateration; trilateration","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-08-16","","","Microwave Sensing, Signals & Systems","","",""
"uuid:e7aa213f-4a07-4c77-a8a7-cadbed12aca9","http://resolver.tudelft.nl/uuid:e7aa213f-4a07-4c77-a8a7-cadbed12aca9","Condition-Based Maintenance scheduling of an aircraft fleet under partial observability: A Deep Reinforcement Learning approach","Tseremoglou, I. (TU Delft Air Transport & Operations); Santos, Bruno F. (TU Delft Air Transport & Operations)","","2024","In the Condition-Based Maintenance (CBM) context, the definition of optimal maintenance plans for an aircraft fleet depends on an efficient integration of : (i) the probabilistic predictions of the health condition of the components and (ii) the stochastic arrival of the corrective maintenance tasks, together with consideration of the preventive maintenance tasks as defined in the Maintenance Planning Document (MPD). To this end, in this paper, we present a two-stage dynamic scheduling framework to solve the aircraft fleet maintenance scheduling problem under a CBM strategy in a disruptive environment. In the first stage of the framework, we address the uncertainty in the predicted health state of the monitored components by planning the optimal maintenance policy based upon the belief state-space of the health of the components. The decision-making process is formulated as a Partially Observable Markov Decision Process (POMDP) and is solved using the Partially Observable Monte Carlo Planning (POMCP) algorithm, considering the aircraft maintenance scheduling problem requirements. In the second stage, a Deep Q-Network (DQN) is developed, that integrates the defined maintenance policy of the monitored components within the scheduling of the aircraft fleet's preventive and corrective maintenance tasks. Our model, through a rolling horizon approach, continuously creates and adjusts the maintenance schedule, reacting to new updated task information, where the availability of maintenance resources constraints the execution of each task. 
The proposed framework was tested on a case study from a large airline and the performance was evaluated against the current state practice of the airline. The results show that our model can schedule 96.4% of monitored components on-time. As a consequence of this, a 46.2% maintenance cost reduction is achieved for the considered monitored components relative to a corrective maintenance approach.","(POMDP); Condition-Based Maintenance (CBM); Deep Reinforcement Learning; Partially Observable Markov Decision Process; Partially Observable Monte–Carlo Planning (POMCP); Planning under uncertainty; Prognostics","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:a48c1478-abfd-4fa1-86b0-0b1ddff9a3b6","http://resolver.tudelft.nl/uuid:a48c1478-abfd-4fa1-86b0-0b1ddff9a3b6","Strong invariance principles for ergodic Markov processes","Pengel, A.L. (TU Delft Statistics); Bierkens, G.N.J.C. (TU Delft Statistics)","","2024","Strong invariance principles describe the error term of a Brownian approximation to the partial sums of a stochastic process. While these strong approximation results have many applications, results for continuous-time settings have been limited. In this paper, we obtain strong invariance principles for a broad class of ergodic Markov processes. Strong invariance principles provide a unified framework for analysing commonly used estimators of the asymptotic variance in settings with a dependence structure. We demonstrate how this can be used to analyse the batch means method for simulation output of Piecewise Deterministic Monte Carlo samplers. We also derive a fluctuation result for additive functionals of ergodic diffusions using our strong approximation results.","asymptotic variance estimation; piecewise deterministic Markov processes; Strong invariance principle","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:b2d6fbf3-30ca-4275-94fe-563842330707","http://resolver.tudelft.nl/uuid:b2d6fbf3-30ca-4275-94fe-563842330707","Beyond Failure and Success: A Process View on Imperfect Projects as Common Practice","van Marrewijk, A.H. (TU Delft Design & Construction Management; BI Norwegian Business School; Vrije Universiteit Amsterdam); Stjerne, Iben (Technical University of Denmark); Sydow, Jörg (Freie Universität Berlin)","","2024","This editorial scrutinizes the dichotomy of a project’s success and failure, which, in our opinion is too rigid, inflexible, and unnuanced. The aim of this special issue is to nuance this dichotomy by moving toward a process view on how imperfection is brought about in projects. We introduce and discuss five topics important for such a process view: (1) improvisation, (2) temporality, (3) power and politics, (4) transition, and (5) intentionality. We argue that a holistic, processual view of imperfections premises emergence and continuous learning and judgments of the project both in and over time. All five articles in this special issue deal with at least one of the discussed themes of our proposed process view on imperfect projects.","failure; imperfect project; learning; process; success","en","contribution to periodical","","","","","","","","","","","Design & Construction Management","","",""
"uuid:2cc582aa-c13c-45b2-8769-09a68d2e1209","http://resolver.tudelft.nl/uuid:2cc582aa-c13c-45b2-8769-09a68d2e1209","Testing Stationarity and Statistical Independence of Multistatic/Polarimetric Sea-Clutter with Application to NetRAD Data","Aubry, Augusto (Università degli Studi di Napoli Federico II); Carotenuto, Vincenzo (Università degli Studi di Napoli Federico II); Maio, Antonio De (Università degli Studi di Napoli Federico II); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2024","The design of bespoke adaptive detection schemes relying on the joint use of multistatic/polarimetric measurements requires a preliminary statistical inference on the clutter interference environment. This is of paramount importance to develop an analytic model for the received signal samples, which is mandatory for the synthesis of radar detectors. In this respect, the aim of this article is the development of suitable learning tools to study some important statistical features of the sea-clutter environment perceived at the nodes of a multistatic/polarimetric radar system. Precisely, the stationarity of the data in the slow-time domain is first assessed by resorting to generalized inner product (GIP) based statistics. Then, the possible presence of structural symmetries in the clutter covariance matrices is investigated. Finally, relationships between some statistical parameters characterizing the sea-clutter returns on the bistatic polarimetric channels are explored via specific sequential hypothesis testing. This research activity is complemented by the use of radar returns measured via the netted RADar (NetRAD), which collects simultaneously monostatic and bistatic polarimetric measurements. The results indicate that the analyzed data can be modeled as drawn from a stationary Gaussian process within the coherence time. 
In addition, the bistatic returns on the different polarimetric channels can be assumed statistically independent with speckle components possibly exhibiting proportional/equal covariance matrices depending on the transmit/receive polarization and bistatic geometry.","Spherically Invariant Random Process (SIRP); sea-clutter; multistatic/polarimetric radar; Generalized Inner Product (GIP); data homogeneity; covariance matrix structure; Model Order Selection (MOS); proportionality/equality of covariance matrices","en","journal article","","","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:d61a6e0a-cc06-486b-a78d-dad20e686e53","http://resolver.tudelft.nl/uuid:d61a6e0a-cc06-486b-a78d-dad20e686e53","Accelerating Large-Scale Graph Processing with FPGAs: Lesson Learned and Future Directions","Procaccini, Marco (University of Siena); Sahebi, Amin (University of Siena); Barbone, Marco (Imperial College London); Luk, Wayne (Imperial College London); Gaydadjiev, G. (TU Delft Quantum Circuit Architectures and Technology); Giorgi, Roberto (University of Siena)","Bispo, Joao (editor); Xydis, Sotirios (editor); Curzel, Serena (editor); Sousa, Luis Miguel (editor)","2024","Processing graphs on a large scale presents a range of difficulties, including irregular memory access patterns, device memory limitations, and the need for effective partitioning in distributed systems, all of which can lead to performance problems on traditional architectures such as CPUs and GPUs. To address these challenges, recent research emphasizes the use of Field-Programmable Gate Arrays (FPGAs) within distributed frameworks, harnessing the power of FPGAs in a distributed environment for accelerated graph processing. This paper examines the effectiveness of a multi-FPGA distributed architecture in combination with a partitioning system to improve data locality and reduce inter-partition communication. Utilizing Hadoop at a higher level, the framework maps the graph to the hardware, efficiently distributing pre-processed data to FPGAs. The FPGA processing engine, integrated into a cluster framework, optimizes data transfers, using offline partitioning for large-scale graph distribution. A first evaluation of the framework is based on the popular PageRank algorithm, which assigns a value to each node in a graph based on its importance. In the realm of large-scale graphs, the single FPGA solution outperformed the GPU solution that were restricted by memory capacity and surpassing CPU speedup by 26x compared to 12x. 
Moreover, when a single FPGA device was limited due to the size of the graph, our performance model showed that a distributed system with multiple FPGAs could increase performance by around 12x. This highlights the effectiveness of our solution for handling large datasets that surpass on-chip memory restrictions.","Accelerators; Distributed computing; FPGA; Graph processing; Grid partitioning","en","conference paper","Schloss Dagstuhl- Leibniz-Zentrum fur Informatik GmbH, Dagstuhl Publishing","","","","","","","","","","Quantum Circuit Architectures and Technology","","",""
"uuid:69a47953-3354-4bee-866a-c43cf3cd4154","http://resolver.tudelft.nl/uuid:69a47953-3354-4bee-866a-c43cf3cd4154","A Review of Climate and Resident-Oriented Renovation Processes: A Framework for Just Decision Support Systems","Ricci, Diletta (TU Delft Design & Construction Management); Konstantinou, T. (TU Delft Architectural Technology); Visscher, H.J. (TU Delft Design & Construction Management)","Littlewood, John R. (editor); Jain, Lakhmi (editor); Howlett, Robert J. (editor)","2024","The renovation of existing buildings is widely recognized as a powerful strategy for reducing emissions and land use. However, when it comes to residential buildings, the socio-technical challenges are particularly complex. The necessity and urgency of increasing energy efficiency often lead to retrofit processes that overlook residents’ needs and fail to consider the impact of renovation techniques on their lives. This study conducts a systematic and interdisciplinary literature review to explore how and to what extent social aspects, particularly residents and their needs, are considered in building renovations. An analysis of 40 studies from the Web of Science and Scopus databases is presented. The holistic overview focuses on two interrelated aspects: the orientation of decision-making processes towards residents and social components of multi-stakeholder involvement, and the relationship and interaction between design choices and residents. By doing so, the review enables a collection of meaningful and heterogeneous criteria for process management and retrofit solutions selection. Recognizing the existing gaps in the literature and clarifying relevant criteria, this review can help identify areas that require further research and intervention.","Renovation processes; End-users; Decision-making; Technology adoption; Systematic review","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-09-07","","","Design & Construction Management","","",""
"uuid:6b6f382c-8d3d-4147-b961-bfbb752ab29c","http://resolver.tudelft.nl/uuid:6b6f382c-8d3d-4147-b961-bfbb752ab29c","Influence of mixing time on a reversal tolerant anode measured ex situ for a PEMFC","Homan, S.J.T. (TU Delft ChemE/Catalysis Engineering; cellcentric GmbH & Co. KG); Aylar, K. (cellcentric GmbH & Co. KG); Jurjevic, A. (cellcentric GmbH & Co. KG); Scolari, M. (cellcentric GmbH & Co. KG); Urakawa, A. (TU Delft ChemE/Catalysis Engineering); Taheri, P. (TU Delft Team Peyman Taheri)","","2024","When no hydrogen can reach the Pt catalyst in the anode for the hydrogen oxidation reaction (HOR) of an operating proton exchange membrane fuel cell (PEMFC), the anode potential increases and causes the cell potential to be reversed compared to normal operation conditions. During this reversal, the oxygen evolution reaction (OER) and carbon oxidation reaction (COR) will occur at the anode, where the COR has devastating consequences for the electrode. Introducing an OER catalyst limits the COR to occur, which makes a reversal tolerant anode (RTA). In this research, RTAs are differentiated by applying different ball milling times during catalyst layer processing, forming big and small OER (IrOx/TiOx) and HOR (Pt/C) catalyst particles. The two different particle sizes were electrochemically tested using a rotating disc electrode (RDE). Both catalyst sizes show a decrease in OER activity (mA cm−2) accompanied by loss of the ionomer in a self-developed accelerated stress test (AST). The small particle RTAs show higher OER activity as a result of increased surface area. However, during a chronopotentiometry measurement, which mimics a fuel cell reversal, the small particle coatings show a worse reversal tolerance. 
This phenomenon can be attributed to the increased difficulty in removing oxygen bubbles.","Catalyst layer processing; OER catalyst; PEMFC; Reversal tolerant anode (RTA); Rotating disc electrode (RDE)","en","journal article","","","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:910e5bb9-6bb1-4306-9d9b-bca5eb95f564","http://resolver.tudelft.nl/uuid:910e5bb9-6bb1-4306-9d9b-bca5eb95f564","Single-crystal vs polycrystalline boron-doped diamond anodes: Comparing degradation efficiencies of carbamazepine in electrochemical water treatment","Feijoo, Sara (Katholieke Universiteit Leuven); Baluchová, S. (TU Delft Micro and Nano Engineering; Charles University); Kamali, Mohammadreza (Katholieke Universiteit Leuven); Buijnsters, J.G. (TU Delft Micro and Nano Engineering); Dewil, Raf (Katholieke Universiteit Leuven; University of Oxford)","","2024","The ongoing challenge of water pollution by contaminants of emerging concern calls for more effective wastewater treatment to prevent harmful side effects to the environment and human health. To this end, this study explored for the first time the implementation of single-crystal boron-doped diamond (BDD) anodes in electrochemical wastewater treatment, which stand out from the conventional polycrystalline BDD morphologies widely reported in the literature. The single-crystal BDD presented a pure diamond (sp3) content, whereas the three other investigated polycrystalline BDD electrodes displayed various properties in terms of boron doping, sp3/sp2 content, microstructure, and roughness. The effects of other process conditions, such as applied current density and anolyte concentration, were simultaneously investigated using carbamazepine (CBZ) as a representative target pollutant. The Taguchi method was applied to elucidate the optimal operating conditions that maximised either (i) the CBZ degradation rate constant (enhanced through hydroxyl radicals (•OH)) or (ii) the proportion of sulfate radicals (SO4•−) with respect to •OH. The results showed that the single-crystal BDD significantly promoted •OH formation but also that the interactions between boron doping, current density and anolyte concentration determined the underlying degradation mechanisms. 
Therefore, this study demonstrated that characterising the BDD material and understanding its interactions with other process operating conditions prior to degradation experiments is a crucial step to attain the optimisation of any wastewater treatment application.","boron-doped diamond (BDD); Electrochemical advanced oxidation processes (eAOPs); polycrystalline; single-crystal; wastewater treatment","en","journal article","","","","","","","","","","","Micro and Nano Engineering","","",""
"uuid:b30543c0-7c0e-4cb2-8a3f-5c784032da32","http://resolver.tudelft.nl/uuid:b30543c0-7c0e-4cb2-8a3f-5c784032da32","Introduction: Shock chains and parallel shocks: Towards a social science of the recovery society","Bryson, John R. (University of Birmingham); Andres, Lauren (University College London (UCL)); Ersoy, A. (TU Delft Urban Development Management); Reardon, Louise (University of Birmingham)","Andres, Lauren (editor); Bryson, John R. (editor); Ersoy, Aksel (editor); Reardon, Louise (editor)","2024","Any one shock is never isolated from other shocks and any one recovery process will be complicated by further related and unrelated shocks and their related recovery processes. This chapter highlights the interactions that occur between shocks that are experienced in parallel or simultaneously and those that occur linearly and take the form of shock chains. These shock processes suggest that there needs to be further social science research on the complexity of shock and related recovery processes, to contribute to academic debate, but also to inform practice, policy development, and implementation. There needs to be a new social science research agenda on characterizing the features of the recovery society. A key issue is that there are many alternative recovery pathways and that each emerges through a set of iterative relationships between people, place, organisations, institutions, and governance processes. 
These alternatives reflect path dependency and previous decisions and related investments but are complicated by place-based intersectionality that compounds the ways in which parallel shocks and shock chains, and related recovery processes, interact with one another, forming highly contextualised shock-related impacts which then mediate the impacts of recovery processes in practice.","shocks; recovery processes; shock chains; parallel shocks; recovery society; social order","en","book chapter","Edward Elgar Publishing","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-07-12","","","Urban Development Management","","",""
"uuid:888a8ad0-fd2f-4d82-86ed-dfb7f0397d2a","http://resolver.tudelft.nl/uuid:888a8ad0-fd2f-4d82-86ed-dfb7f0397d2a","Challenges and opportunities for process intensification in Europe from a process systems engineering perspective","Li, Q. (TU Delft ChemE/Process Systems Engineering); Somoza Tornos, A. (TU Delft ChemE/Process Systems Engineering); Grievink, J. (TU Delft ChemE/Product and Process Engineering); Kiss, A.A. (TU Delft ChemE/Process Systems Engineering)","","2024","Process Intensification (PI) is an effective way to enhance process efficiency and sustainability at affordable costs and efforts, attracting particular interest in the European area, as one of the most important chemical production areas in the world. PI primarily contributes by developing and testing new processing technologies that once integrated within a process improve the overall process performance substantially but as a result, it may alter the overall process (flowsheet) structure and its dynamic behavior. As such PI plays a key role in improving energy efficiency, optimizing resource allocation, and reducing environmental impact of industrial processes, and thereby leading to a cost-effective, eco-efficient, low-carbon and sustainable industry. However, along with opportunities, the PI new technologies have challenges related to failures in longer-term performance. In this respect, Process Systems Engineering (PSE) stance is more on integration aspects of new PI technologies into processes by making process (re)designs, doing operability studies, and performance optimizations within a supply chain setting. PSE contributes to overcoming the challenges by providing systematic approaches for the design and optimization of PI technologies. This perspective paper is a lightly referenced scholarly opinion piece about the status and directions of process intensification field from a PSE viewpoint. 
Primarily, it focuses on PSE perspectives towards sustainable, lower-energy-usage process systems and provides a brief overview of the current situation in Europe. It also emphasizes the key challenges and opportunities for (new) PI technologies considering their integration in a process in terms of process synthesis and design, process flowsheet optimization, process and plantwide control, (green) electrification, and sustainability improvements. Potential research directions on these aspects are given from the industrial and academic perspectives of the authors.","ecoefficiency; energy efficiency; fluid separation; process intensification; process systems engineering","en","journal article","","","","","","","","","","","ChemE/Process Systems Engineering","","",""
"uuid:c31d254c-045c-4f4d-bd82-0d24ef8d48fa","http://resolver.tudelft.nl/uuid:c31d254c-045c-4f4d-bd82-0d24ef8d48fa","Digitally Intensive Frequency Synthesis and Modulation Exploiting a Time-mode Arithmetic Unit","Gao, Z. (TU Delft Electronics)","Babaie, M. (promotor); Staszewski, R.B. (promotor); Delft University of Technology (degree granting institution)","2023","Reducing power consumption is becoming increasingly important for the sustainability of the communication industry because it is expected to consume a significant portion of the global electricity in the face of the exponentially increasing demands on the volume and rate of data transmission. As the scope narrows to the individual wireless device level, the reduced power consumption helps to extend the lifetime of battery-powered devices, thereby leading to improved user experience and enabling the development of innovative applications. The quest for the lower power consumption will profoundly shape the wireless transceiver design, i.e., each critical block in the system should constantly reduce its drained power without sacrificing the performance. With this background, the thesis focuses on the phase-locked loops (PLL) that generate RF clocks for wireless transceivers, and develops low-power techniques suppressing the fractional-spur levels when the PLL generates unmodulated carrier, and the phase modulation (PM) error when the PLL additionally serves as a two-point modulator...","time-mode arithmetic unit (TAU); digital-to-time converter (DTC); phase-locked loop (PLL); fractional spur; process voltage and temperature (PVT); spur cancelation; self-interference; synchronous interference; interference mitigation; PLL-based modulator; phase modulator; two-point modulation; non-uniform clock compensation (NUCC); phase-domain digital pre-distortion (DPD); LC-tank nonlinearity","en","doctoral thesis","","978-94-6366-779-1","","","","","","2024-12-07","","","Electronics","","",""
"uuid:a152e57f-1c21-4941-b298-55f7b133e2e4","http://resolver.tudelft.nl/uuid:a152e57f-1c21-4941-b298-55f7b133e2e4","Modeling the anaerobic fermentation of CO, H2 and CO2 mixtures at large and micro-scales","Almeida Benalcazar, E.F. (TU Delft BT/Bioprocess Engineering)","Noorman, H.J. (promotor); Maciel Filho, Rubens (promotor); Posada Duque, J.A. (copromotor); Delft University of Technology (degree granting institution); Unicamp, Campinas (degree granting institution)","2023","The mitigation of global warming requires an urgent shift from the fossil fuel-based productive matrix currently in place. Technological platforms are being developed to reduce the amount of carbon of fossil origin, which is emitted to the atmosphere as a side-product from the production of energy. Gas mixtures containing CO, H2 and CO2 are candidates to drive the replacement of such fossil carbon. Each component in the gas mixture called synthesis gas (syngas) can be produced using renewable energy and the carbon from renewable materials, such as lignocellulose, biogas or municipal solid wastes. The production of chemicals from the gas mixtures can be done through the mature thermochemical conversion or through fermentation, a technology still under development. The metabolism of syngas-fermenting microorganisms and their behavior inside large-scale bioreactors are still not well understood...","syngas fermentation; ethanol; Metabolic modeling; Process simulation; life-cycle assessment","en","doctoral thesis","","978-94-6384-495-6","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:d2e1100d-af4e-4af7-8124-41ca5fa881c1","http://resolver.tudelft.nl/uuid:d2e1100d-af4e-4af7-8124-41ca5fa881c1","Classification of Human Activities with Distributed Radar Systems","Guendel, Ronny (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Fioranelli, F. (promotor); Delft University of Technology (degree granting institution)","2023","This thesis introduces the relevance of radar systems in the realm of human activity recognition (HAR) in Chapter 1. The study touches upon the complex understanding of continuous human activities and the existing challenges and gaps in current methodologies, hinting at the innovative technical approaches that are to be detailed in the following chapters.
The technical foundation of the research is given in Chapter 2 by introducing distributed ultra-wideband (UWB) radar systems. These systems, especially when spatially distributed, bring a depth of information by integrating data from multiple radar nodes and spatial perspectives. There is a significant emphasis on how different fusion techniques, both late and early, play a crucial role in harnessing data effectively, particularly in the context of HAR.
A critical contribution in the study is the potential to deviate from conventional radar data domains, such as micro-Doppler spectrograms for activity recognition. The research in Chapter 3 highlights an alternative approach, rooted in the radar phase information from a high-resolution range-time map, which bypasses the limitations of common FFT-based radar data domains. This methodology, paired with the histogram of oriented gradients (HOG) algorithm, showcases promising results that can be particularly interesting for real-time applications with computational constraints.
The research in Chapter 4 underlines the efficacy of employing a network of spatially distributed UWB radars for continuous HAR. These networks address the downsides of using a single sensor, like unfavorable aspect-angle observations. The study delves into fusion methodologies and their implementation in classifying activities, particularly using recurrent neural networks. To assess these continuous recognition systems, novel evaluation metrics are proposed, offering a deeper insight into the practicality and effectiveness of such systems with temporal classification capabilities.
Indoor radar networks often face multipath challenges. The study in Chapter 5 not only identifies this challenge, but also exploits these typically unwanted multipath components to enhance classification capabilities. Through a pipeline that isolates, determines, and analyzes different propagation pathways, there is an evident boost in the network’s perception. This novel approach showcases a significant performance improvement, especially when employing convolutional neural networks.
Chapter 6 of the research focuses on the complexities of HAR in crowded environments. The study introduces the challenges of differentiating the activities of walking versus standing idle for multiple individuals simultaneously. The investigation shows initial promising results using synthetic data generated from experimental recordings, employing a regression-based approach and leveraging diverse techniques such as LSTM, CNN, SVM, and linear regression.
In conclusion, the research offers a reflective glance at the breakthroughs achieved in the domain of radar-based HAR in Chapter 7. The significant contributions and advancements of the study are highlighted. Looking ahead, the chapter identifies research areas for exploration and further improvement.","radar signal processing; ultra-wideband radar; radar sensor network; distributed radar; human activity recognition; micro-Doppler signatures; deep learning","en","doctoral thesis","","978-94-6366-769-2","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:a3070931-7512-44fa-833e-4fdc9e33da4a","http://resolver.tudelft.nl/uuid:a3070931-7512-44fa-833e-4fdc9e33da4a","AI-Assisted Design & Optimization for Predictive Maintenance: A Case Study using Deep Learning and Search Metaheuristics for Structural Health Monitoring in Aviation","Ewald, Vincentius (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Groves, R.M. (promotor); Delft University of Technology (degree granting institution)","2023","One of the classical solutions to maintain the aircraft structural integrity is to rely on the analysis of non-destructive testing (NDT) inspector with various inspection methods. However, it is relatively expensive in matter of time and costs to train human resources until the certification is reached. Further, in majority of the cases of aircraft scheduled and unscheduled maintenance, most of the detected damages are far below the damage tolerance limit and therefore are considered as a costly false positive because such inspections generally require additional downtime. Structural Health Monitoring (SHM) tries to reduce the wasteful resources in the maintenance, repair, and overhaul (MRO) industry by signaling such false positives during the maintenance process by becoming an integral part of the structure itself.
On the other hand, there has been an increase in the use of artificial intelligence (AI) methodologies, such as computational heuristics and machine learning, in many areas of human civilization, including voice and face recognition, language translation, and automated driving. There has been much interest in implementing AI to assist SHM in maintaining airworthiness while driving the cost down. Nevertheless, the maintenance of airworthiness (such as, but not limited to, EASA Part 145/M and FAA CFR Part 21) is a heavily regulated area and is not easily changed.
The current state of the art was captured in the literature review. This includes recent developments in guided-wave-based SHM and parameter optimization, as well as recent trends and advances in artificial intelligence, such as machine and deep learning. The findings from the state of the art were used as the basis to determine the research problem and to propose the solution.
The first part of the proposed solution consisted of a short review of the damage growth assumption within the damage tolerance framework and the methodology used to generate and capture the Lamb wave signal within a Finite Element (FE) environment. This methodology is a deterministic solution that can be partially used for solving continuous optimization in the deterministic sensor placement problem. It was further expanded with a semi-stochastic approach to address unpredictable damage locations, incorporating metaheuristic searches such as genetic algorithms and swarm intelligence. The final form of the first part of the solution was a compromise between the deterministic and semi-stochastic actuator-sensor topologies.
The second part of the proposed solution was an investigation of whether deep learning can be used to treat the Lamb wave signal given the configuration obtained from the first part of the proposed solution. To do so, an assumption based on converging probability measures and generalization bounds in deep learning must be made. Then, the approach is to represent the entirety of the captured Lamb wave signal in the time-frequency domain, either as randomly sampled spectrograms or as layers of joined spectrograms. After the training, the hypothesis was validated with A/B testing.
Then, the research was expanded to understand the scalability of deep learning for SHM for a given data size, model parameters, and restrictions on physical memory. In this sense, the signal representations were trained sequentially, with an example of a hybrid convolutional recurrent network. The investigation focused on the stability behavior of convolutional-recurrent modelling for variable spectrogram length and the experimental validation of the model for classification of the Lamb wave spectrogram signals.
This research presents a theoretical framework that examines the factors influencing consensus-building on heritage values and attributes. Based on this framework, a public participation methodology empowered by AI is developed and tested in the case study of windcatchers in Yazd, Iran. This study compares the perceptions of three stakeholder groups: experts, policymakers, and users. The findings reveal consensus on the value of windcatchers while highlighting differing interpretations of their significance.
The AI-empowered methodology proves effective in uncovering stakeholder groups' understanding of cultural significance. This framework can be replicated in other case studies, facilitating participatory heritage practices. The thesis contributes to knowledge in public participation, cultural significance, and AI in heritage planning, offering insights for practitioners and policymakers to promote inclusive heritage practices. It emphasizes the importance of stakeholders' contributions and advocates for a more diverse and inclusive approach to heritage planning.
the fundamental mechanisms underlying these systems. As such, network science has become a highly active and dynamic field, driving the development of new theoretical frameworks, computational tools, and empirical methods that continuously push the boundaries of knowledge and understanding in numerous science and engineering domains.
The first part of this thesis centres on the structural properties of complex networks and their practical applications. We demonstrate that the orthogonal eigenvectors of the adjacency matrix of a simple, unweighted, and undirected graph are sufficient to recover that graph, albeit potentially not in a unique manner (Chapter 2). This observation led us to uncover co-eigenvector graphs, which are graphs that share the same eigenvectors while having distinct eigenvalues. Co-eigenvector graphs are the dual counterparts of cospectral graphs, which share identical eigenvalues but possess distinct eigenvectors.
In an unweighted graph, the number of walks between node pairs of a particular length can be expressed in terms of the corresponding power of the adjacency matrix. However, deriving a similar solution for the number of paths is significantly more intricate (Chapter 3). We present three distinct analytical solutions in matrix form for computing the number of paths of any length between node pairs, utilising different types of walks and leveraging principles from the mathematical field of combinatorics. The computational complexity of these solutions varies depending on the sparsity of the graph. The effective resistance metric, which characterises the entire network as perceived from the
vantage point of two given nodes, represents a powerful tool for addressing a wide range of challenges in network theory. In Chapter 4, we leverage the information contained in effective resistance to solve the inverse all shortest path problem, wherein a weighted graph satisfying given upper bounds on the shortest path weights between node pairs is sought, with sparsity being a critical consideration. Additionally, we propose a novel graph sparsification algorithm that selectively removes links from an unweighted graph in a stepwise manner, with the goal of either minimising or maximising the effective resistance
of the resultant graph.
The second part of this thesis pertains to linear processes on complex networks, exploring their properties and applications. Our research reveals that a simple process of attraction and repulsion between adjacent nodes on a one-dimensional line, based on the similarity of their neighbourhoods, can effectively group together nodes from the same community (Chapter 5). Our linear clustering process generally produces more accurate partitions than the most prevalent modularity-based clustering methods in the literature, at a comparable computational complexity. An empirical part of our research on processes in complex networks was made possible by a network construction based on a unique data set containing each municipality’s area, population and geographically adjacent neighbouring municipalities. This construction enabled research on a dynamic network of connected municipal nodes at a national level over the period from 1830 to 2019 (Chapter 6). By connecting the population data, area data and municipal merger data of all Dutch municipalities, we discovered that the logarithm of the municipal area and population size yields an almost linear difference equation over time. Research into the municipal merger process over the period 1830-2019 has shown that 873 of the 1228 Dutch municipalities have merged into adjacent municipalities with a larger population.
Our simulation of municipality mergers based on network effects caused by population growth by municipality resulted in a county-level predictive accuracy of 91.7% over a 200-year period. If every node within a network exhibits linear internal dynamics of a specific order, and the dynamic interactions between these nodes are also linear, then the entire network conforms to a collection of linear differential equations (Chapter 7). Our study offers an analytical solution for the comprehensive network dynamics in state-space form, achieved by merging the fundamental topology and internal linear dynamics of every individual node.","paths; networked systems; graph spectra; effective resistance; inverse shortest path; graph sparsification; clustering; linear process","en","doctoral thesis","","978-94-6473-200-9","","","","","","2023-09-14","","","Network Architectures and Services","","",""
"uuid:c53da6a5-948a-490d-9061-1f650f7a6125","http://resolver.tudelft.nl/uuid:c53da6a5-948a-490d-9061-1f650f7a6125","Approximations and transformations of piecewise deterministic Monte Carlo algorithms","Bertazzi, A. (TU Delft Statistics)","Jongbloed, G. (promotor); Bierkens, G.N.J.C. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis studies methods to improve the applicability and the performance of Markov Chain Monte Carlo (MCMC) algorithms based on Piecewise Deterministic Markov processes (PDMPs). First, we discuss the key ideas that lay the foundations of the field of MCMC, spanning from the Metropolis-Hastings algorithm to PDMC methods, emphasising a common structure underlying most non-reversible MCMC algorithms studied in the literature. The rest of the thesis is divided in two parts, respectively treating approximations and transformations of PDMC algorithms.
In the first part we introduce several discretisation schemes that approximate a given PDMP and study the properties of the proposed algorithms in detail. This area is of fundamental importance for making PDMPs widely applicable, as the PDMPs considered in the MCMC literature typically cannot be simulated exactly, either because of complicated deterministic dynamics or because the random event times are distributed according to an exponential distribution with a non-homogeneous rate. In the latter case, existing approaches to simulate the random event times are applicable exclusively when the rate is of simple form, a requirement satisfied only by toy models from the MCMC literature. In this thesis we introduce and study a wide variety of time discretisations of PDMPs of any order of accuracy, which can now be used as a basis for MCMC algorithms. We study two types of discretisations: the first kind is obtained by generalising the principle behind classical Euler schemes, while the second is based on splitting schemes.
In both settings, we establish the dependence of the error on the step size of the discretisation. For suitable Euler schemes we prove uniform in time estimates on the weak error, a particularly challenging result which gives that the error is fully controlled by the step size and does not depend on the time horizon. Moreover, for approximations of PDMPs obtained with Euler-based schemes we obtain error bounds in Wasserstein and total variation distance using the coupling approach.
For our approximations based on splitting schemes we mainly focus on the Zig-Zag sampler (ZZS) and Bouncy Particle Sampler (BPS) and study the best splitting scheme in terms of bias in the invariant measure. For both samplers we obtain conditions ensuring existence and uniqueness of a stationary distribution for the approximation process, as well as exponential convergence to such a distribution. Importantly, we show that symmetric splitting schemes are of second order, although they only require one computation of the gradient of the negative log-likelihood per iteration. Another important novelty we introduce is the possibility to correct the introduced bias via a skew-reversible Metropolis-Hastings acceptance-rejection step. This allows us to design the first unbiased, PDMP-based MCMC algorithms that can be applied effortlessly to sample from any target probability distribution. Our numerical experiments show that the remarkable properties of PDMPs give their approximations excellent convergence properties improving over benchmark methods such as Hamiltonian Monte Carlo and the unadjusted Langevin algorithm.
The second part of the thesis concerns transformations of PDMPs. First, we discuss space transformations of PDMPs, in which case the main goal is to improve the performance of PDMC algorithms when the target distribution $\pi$ is anisotropic. Our proposal is to design PDMC algorithms that learn adaptively the covariance structure of $\pi$ and use this information to tune the velocity of the underlying PDMP, i.e. the directions that the PDMP is more likely to explore. Finding a good set of directions requires knowledge of the target $\pi$, and hence information on previous positions of the process needs to be used. In a similar fashion, we introduce adaptive PDMC algorithms which automatically tune the refreshment rate of the process, i.e. the frequency at which the current velocity vector is replaced with an independent draw from a suitable distribution. For these algorithms we carefully study the convergence to the target distribution by establishing ergodicity, which is challenging for such non-homogeneous Markov processes. Moreover, we test our algorithms on some benchmark examples, on which we observe relevant improvements over the standard, non-adaptive samplers.
In the last chapter of the thesis we consider time transformations of (piecewise deterministic) Markov processes, with an emphasis on improving the convergence of MCMC algorithms. In particular, we study the effect on the properties of a Markov process of a change of the speed of time, where importantly changes in speed depend on the state of the process. This notion can prove helpful in the context of multimodal target distributions, in which case we argue that communication between different modes can be improved by increasing the speed of time when the process is located in low density regions. We connect various properties of a process to those of a related time-changed process, such as a connection between the stationary distributions, the generators, non-explosivity, ergodicity and rate of convergence to the limiting distribution. For PDMPs we show that suitable time transformations can make a geometrically ergodic Markov process uniformly ergodic, a remarkable property which means that the initialisation of the process does not affect the speed of convergence. We apply our theorem to time transformations of the Zig-Zag process, demonstrating the applicability of our conditions. By applying this framework to PDMPs we define several novel processes which have dynamics depending on a user-chosen, interpretable speed function.","MCMC algorithms; non-reversibility; Piecewise deterministic Markov processes; Bayesian statistics; computational statistics","en","doctoral thesis","","","","","","","","","","","Statistics","","",""
"uuid:0f61f871-7c9c-47fc-a542-10883fb2d4de","http://resolver.tudelft.nl/uuid:0f61f871-7c9c-47fc-a542-10883fb2d4de","Miniature sensorized platform for engineered heart tissues","Dostanic, M. (TU Delft Microelectronics; TU Delft Electronic Components, Technology and Materials)","Sarro, Pasqualina M (promotor); Mastrangeli, Massimo (copromotor); Delft University of Technology (degree granting institution)","2023","The high death toll of cardiovascular diseases worldwide and the lack of effective treatments for them are the main motivation for developing alternative and more efficient models for cardiac drug development and disease research. The missing link between current laboratory research on static in vitro and animal models and the clinical stage research on human patients could be created using the rapidly emerging Organ-on-Chip (OoC) technology. Themicrophysiological models developed within OoC research combine devices made of biocompatible, soft materials and human-origin organ-specific cell types, which are then exposed to flow, chemical, electrical or biomechanical stimuli.
Modeling a human cardiac in vivo environment in an artificial model represents quite a challenge from several aspects. First, cardiac tissue in vivo is exposed to a strong coupling between different biomechanical and electrical stimuli that need to be faithfully captured by an in vitro model. Furthermore, such an in vitro model should recapitulate the complexity of cell-cell and cell-extracellular matrix (ECM) interactions between different cardiac cell types, while obtaining physiologically relevant responses. This thesis addresses the first challenge, in an attempt to engineer a dynamic, artificial microenvironment, suitable for the growth, monitoring, and stimulation of hiPSC-based engineered cardiac tissues (EHTs).....","Engineered heart tissue; Heart-on-chip; Organ-on-chip; Microfabrication; Polymer processing","en","doctoral thesis","","","","","","","","","","Microelectronics","Electronic Components, Technology and Materials","","",""
"uuid:ff29de25-2bc6-47a2-813c-102ccb663316","http://resolver.tudelft.nl/uuid:ff29de25-2bc6-47a2-813c-102ccb663316","Effect of biphasic system constituents on liquid-liquid extraction of 5-hydroxymethylfurfural","Altway, S. (TU Delft ChemE/Transport Phenomena)","de Haan, A.B. (promotor); Delft University of Technology (degree granting institution)","2023","HMF (5-hydroxymethylfurfural) is one of the bio renewable materials that can be used as an important platform chemical to produce biofuel and various chemical products. The main application of HMF in the chemical industry is a platform chemical for the production of plant-based polyethylene terephthalate (PET). HMF is produced through hexose dehydration which fructose or glucose is arranged as a feedstock. Liquid-liquid extraction can be applied in HMF production to enhance the selectivity and yield of HMF. HMF can be extracted from aqueous solution into the organic phase which prevents the degradation of HMF. Furthermore, it has been recognized that ionic liquid (IL) and deep eutectic solvent (DES) can be used as stabilizing agent in HMF production by suppressing the formation of side-products, hence increase the HMF yield as well. However, research on the systematic thermodynamics of HMF extraction is quite limited and needed to be developed. The thermodynamic data, such as phase equilibrium data and partitioning of HMF into organic phase are needed as basis for a rational design and optimal separation of HMF from the aqueous solution.
The objective of this research is to systematically study the effect of biphasic system constituents on the liquid-liquid extraction of HMF at 313.15 K and atmospheric pressure (0.1 MPa). The extraction performance was evaluated based on the values of the separation factor and the HMF distribution coefficient, which were determined from liquid-liquid equilibrium (LLE) data. The experimental LLE data of the investigated systems were also correlated well using thermodynamic models: the NRTL and UNIQUAC models were used to correlate the ternary experimental LLE data, whilst the experimental LLE data containing salt, IL, DES, and sugar were correlated using the NRTL model. We used aqueous-organic biphasic systems, and also added the IL [EMIM][BF4] (1-ethyl-3-methylimidazolium tetrafluoroborate) or the DES ChCl-urea (choline chloride-urea) to the aqueous phase. The effects of adding sugar (fructose) and salts with a variety of cations (Na+, K+) and anions (Cl-, SO4 2-) were also studied. Three different extraction solvents, methyl isobutyl ketone (MIBK), 2-pentanol, and tributyl phosphate (TBP), were used for comparison.
The results of this study indicate that, for 2-pentanol, the HMF distribution coefficient is up to 1.4 times higher than for MIBK, whereas MIBK has a 2-3 times higher separation factor than 2-pentanol. TBP is not only more selective as an extraction solvent than the other two solvents, but also superior in terms of HMF distribution coefficient. The salting-out strength of salts for organic solvent (MIBK or 2-pentanol)-HMF-water-salt systems is in the order NaCl > Na2SO4 > KCl > K2SO4. NaCl was found superior in both separation factor and distribution coefficient of HMF compared to the other salts studied. Furthermore, the separation factor and HMF distribution coefficient decreased with increasing IL [EMIM][BF4] and DES (ChCl-urea) concentrations; however, DES (ChCl-urea) decreased the extraction performance less than IL [EMIM][BF4]. The addition of salt (NaCl) enhanced the separation factor and the distribution coefficient of HMF, enabling compensation of the IL and DES effects. The presence of salt can enhance both extraction performance parameters up to 2-4 times for all the investigated systems using the three different organic solvents, and also in the presence of IL or DES. The presence of fructose in the solution, by contrast, had a limited effect on the extraction performance. In general, it can be inferred that, by taking advantage of IL/DES as a stabilizing agent, aqueous IL/DES with NaCl is a good combination for the HMF extraction process to achieve good extraction performance.","Extraction performance; 5-Hydroxymethylfurfural; Liquid-liquid equilibria; Separation process; Thermodynamics model","en","doctoral thesis","","978-94-93330-03-0","","","","","","2023-04-14","","","ChemE/Transport Phenomena","","",""
"uuid:b31521e3-0d1b-4df0-a6d9-12f24f0a4a6e","http://resolver.tudelft.nl/uuid:b31521e3-0d1b-4df0-a6d9-12f24f0a4a6e","Liquid Territories: Configurations of geographic space in the cartographic projections of the Mekong River’s catchment areas","Romanos, C. (TU Delft Theory, Territories & Transitions)","Schoonderbeek, M.G.H. (promotor); van der Velde, J.R.T. (copromotor); Delft University of Technology (degree granting institution)","2023","The role played by the Mekong River in the organization of land and people is inextricably linked with a particular spatial category. The concept of the hydrological catchment extends the space of the river far beyond the limits of the river’s perennial waterbodies, to encompass vast areas inhabited by millions of people speaking different languages. Fundamental to the estimation of precipitation and water volume, areal denotations of the Mekong’s basin, delta and floodplain have been repeatedly drawn on maps by geographers, planners, engineers and cartographers. Mapped representations of the Mekong River however are not only the result of recording the flows of water, nor the domain of a single discourse. With diverging intentions, distinct and sometimes conflicting projections of the basin, delta and floodplain have prescribed the differentiation and unification of parts of mainland Southeast Asia, to articulate liquid territories that are outside a single state’s jurisdiction. As a result, the mapped articulation of surface water is reflected in the configuration of national boundaries and the arrangement of settlements. To understand how the Mekong’s catchments emerge as the geographic reference for human activities, the dissertation examines the technical and cultural notions that underpin the preparation of these maps. 
Drawing on the discourses of hydrology, geography and cartography, as well as infrastructure design, military science, colonial politics and regional planning, the research asks what territories are produced and maintained by evoking the geography of the river’s flows.","Mekong River; Mekong basin; Mekong delta; floodplain; regional planning; cartography; catchment hydrology; territory; water infrastructure planning; Settlement development; maps; geographic representation; geography; urbanization processes; hydrosocial territories; urban planning; territorial design","en","doctoral thesis","","","","","","","","","","","Theory, Territories & Transitions","","",""
"uuid:66f0c152-65a0-45bc-b542-ba9799d6a0c1","http://resolver.tudelft.nl/uuid:66f0c152-65a0-45bc-b542-ba9799d6a0c1","The Circle of DL-SCA: Improving Deep Learning-based Side-channel Analysis","Wu, L. (TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Picek, S. (copromotor); Delft University of Technology (degree granting institution)","2023","For almost three decades, side-channel analysis has represented a realistic and severe threat to embedded devices' security. As a well-known and influential class of implementation attacks, side-channel analysis has been applied against cryptographic implementations, processors, communication systems, and, more recently, machine learning models. Two reasons make these attacks powerful. First, they take advantage of unintended information leakages that the security designer could easily forget. These leakages can be conveyed from various sources, such as power consumption, electromagnetic emanations, time, temperature, and acoustic and photonic emissions. Protection from such leakages can be challenging and costly. Second, such attacks do not require complicated and expensive equipment or frameworks. Commonly, an adversary uses an oscilloscope to monitor some of those side-channel leakages, then performs statistical analysis to find the relation between the leakages and the actual executed values, and finally uses these relations to recover secret information.
Fortunately, hardware and software developers are prepared for these attack methods. Several protection mechanisms, also called side-channel countermeasures, have been implemented to increase the security assurance of their devices. However, this cat-and-mouse game has now changed because of the rise of artificial intelligence in side-channel analysis: some countermeasures, resilient to conventional methods, can be easily bypassed by machine learning. This thesis aims to improve the capability of side-channel analysis using deep learning techniques. Specifically, we propose approaches covering complete deep learning-based side-channel analysis procedures (we denote them as ""The Circle of DL-SCA""). Before applying the leakages to launch actual attacks, in chapter 2, we offer strategies for improving the 'quality' of the leakages from various aspects. Then, in chapter 3, the study focuses on critical deep learning hyperparameters and proposes two automated neural architecture search methods that relieve evaluators of the burden of tuning the neural network.
Besides developing new attack strategies, we also focus on the existing attack methods and investigate how to enhance their efficiency, robustness, and explainability. Chapter 4 introduces an efficient learning scheme that can reduce the required training traces. Then, we develop an attack evaluation metric that can reliably reflect the performance and robustness of the model. In chapter 5, we create a novel methodology to evaluate the influence of noise and countermeasures on deep-learning models, then apply the research outcomes to design low-cost deep-learning resilient countermeasures. Our research outcomes will push the designers to develop more secure devices. The feed-forward loop between us (researchers) and designers can eventually make the electronic world more secure.","Side-channel analysis; Deep learning (DL); Pre-processing; Hyperparameter tuning; Metric; Countermeasures","en","doctoral thesis","","9789464730678","","","","","","","","","Cyber Security","","",""
"uuid:1a678f17-c9c5-46c7-aca2-0dea1f00d1fa","http://resolver.tudelft.nl/uuid:1a678f17-c9c5-46c7-aca2-0dea1f00d1fa","Phase-Coded FMCW for Automotive Radars","Kumbul, U. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Silveira Vaucher, C. (promotor); Petrov, N. (copromotor); Delft University of Technology (degree granting institution)","2023","Autonomous driving is a new emerging technology that will enhance traffic safety. Automotive radars are essential to attaining autonomous driving since they can function in adverse weather conditions and are used for detection, tracking, and classification in traffic settings. However, the dramatic growth in the number of radar sensors used for automotive radars has raised concerns about spectral congestion and the coexistence of radar sensors. The mutual interference between multiple radar sensors downgrades the sensing performance of automotive radar and needs to be mitigated. Moreover, automotive radars have limited processing power, preventing them from using computationally heavy techniques to countermeasure interference. This thesis aims at developing, evaluating and verifying a robust waveform with required processing steps suitable for automotive radars to boost the coexistence of multiple radar sensors. To achieve this task, phase-coded frequency modulated continuous wave (PC-FMCW) and necessary processing steps are studied.
The first step is taken by investigating the sensing properties of the PC-FMCW waveforms and possible receiver strategies in Chapter 2. It is demonstrated that the ambiguity function of the code is sheared after frequency modulation. Moreover, different binary phase codes are examined with the PC-FMCW waveforms, and their sensing performance is compared in terms of integrated sidelobe level. Subsequently, two receiver approaches based on the dechirping process to decrease the sampling demands of the PC-FMCW waveforms are examined. The sensing performance of the investigated receiver approaches is compared, and the trade-offs between the sensing performance and the code bandwidth are analyzed. Moreover, the PC-FMCW waveform is applied to a real scenario, and the sensing performance of the investigated receiver structures is validated experimentally.
Chapter 3 investigates the beat signal spectrum widening due to coding and explores the smoothed phase-coded frequency modulated continuous wave (SPC-FMCW) to improve the sensing performance within the limited receiver analogue bandwidth. The abrupt phase changes seen in binary phase-coded signals are analyzed, and a phase smoothing operation to reduce the spectral broadening of the coded beat signals is proposed. The introduced SPC-FMCW waveforms are analyzed in different domains and compared with binary phase coding. It is shown that the proposed smoothing operation decreases the spectral broadening of the coded beat signal and improves the sensing performance of the waveform.
In Chapter 4, the limitation in the group delay filter receiver approach is investigated, and the appropriate receiver strategy with low computational complexity is designed to process the PC-FMCW waveforms. The impact of the group delay filter on the coded beat signal is examined in detail, and a phase lag compensation is proposed to enhance decoding performance. It is demonstrated that performing phase lag compensation on the transmitted code eliminates the undesired effects of the group delay filter, and the beat signal is recovered properly after decoding. Then, the properties of the resulting waveforms are theoretically examined, and the sensing performance improvement over the existing approach is demonstrated. Moreover, both sensing and cross-isolation performance of the introduced waveforms with proposed processing steps are validated experimentally.
Chapter 5 studies the PC-FMCW waveforms for a coherent multiple-input-multiple-output (MIMO) radar. To this end, the MIMO ambiguity functions of the PC-FMCW waveform with different code families are investigated for their separation capability and compared with the PMCW waveform. It is illustrated that the PC-FMCW ambiguity function outperforms the PMCW one in terms of range resolution, Doppler tolerance, and sidelobe level for identical types of codes. Afterwards, the developed phase lag compensated waveform with a single transmitter-receiver approach is applied to a coherent MIMO radar, and a novel PC-FMCW MIMO structure is proposed in Chapter 5. The introduced MIMO structure jointly utilizes phase coding in both fast-time and slow-time to achieve low sidelobe levels in the range-Doppler-azimuth domains while maintaining high range resolution, unambiguous velocity, good Doppler tolerance and low sampling requirements. The sensing performance of the introduced MIMO structure is evaluated and compared with state-of-the-art techniques. Moreover, the proposed MIMO structure's practical limitations are investigated and demonstrated. In addition, the sensing performance of the developed approach with simultaneous transmission is verified experimentally.
Finally, the interference resilience and communication capabilities of the developed PC-FMCW radar have been studied in Chapter 6. First, the automotive radar interference problem between various types of continuous waveforms is examined. The interference analysis formulation is extended to PC-FMCW waveforms, and a generalised radar-to-radar interference equation is proposed. The introduced equation can be utilised to quickly and accurately derive the numerous interference scenarios discussed in the literature. In addition, the proposed equation's validity to characterise the victim radar's time-frequency distribution is demonstrated experimentally using the commercially available off-the-shelf automotive radar transceivers. Afterwards, the robustness of the developed PC-FMCW radar against different types of FMCW interference cases is examined, and an improvement in the sensing performance over the conventional FMCW waveform is demonstrated. Moreover, the communication performance of the PC-FMCW with dechirping receivers is compared, and the trade-off between the bit error rate and the code bandwidth is investigated.
This thesis shows that the developed PC-FMCW radar structure can provide high mutual orthogonality to enhance the functioning of multiple radars within the same frequency bandwidth while sustaining the low sampling demand and good sensing performance. Consequently, the introduced approach can be effectively utilized by automotive radars to mitigate mutual interference between multiple radar sensors and improve the sensing performance of simultaneous MIMO transmission. Although the focus is on the application in an automotive radar context, the developed approach can also be used in other radar fields.","Automotive Radar; Phase-Coded Chirps; Interference Mitigation; MIMO Radar; Mutual Orthogonality; Radar Signal Processing","en","doctoral thesis","","978-94-6384-420-8","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:bf83b94a-4438-47c7-bfca-7a93334d79e4","http://resolver.tudelft.nl/uuid:bf83b94a-4438-47c7-bfca-7a93334d79e4","Seismic-interferometric applications for near-surface and mineral exploration","Balestrini, F.I. (TU Delft Applied Geophysics and Petrophysics)","Draganov, D.S. (promotor); Ghose, R. (promotor); Delft University of Technology (degree granting institution)","2023","Seismic methods are widely used for the exploration of the Earth’s subsurface. While they allow higher resolution compared to other geophysical methods, their performance depends on site and geological characteristics, and the volume and type of recorded information. Additionally, data processing plays a critical role in the efficacy of the application of seismic methods.
A common challenge when utilising seismic methods arises as a result of field restrictions and cost constraints. As a consequence, seismic data often suffer from irregular or sparse spatial sampling, which can affect the application of advanced processing and imaging algorithms, for instance, surface-related multiple elimination and wave equation migration. These algorithms require dense and regular sampling to provide reliable results. Thus, seismic-data regularisation and interpolation are commonly utilised processing steps. Nevertheless, the interpolation of data for relatively large gaps is not trivial, in particular for land data acquired in complex geological settings where the seismic events exhibit pronounced curvature and lack of continuity....","Seismic interferometry; seismic data processing; data reconstruction","en","doctoral thesis","","978-94-6366-671-8","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:afad2560-3e7b-4b43-9997-17b4e24e9e02","http://resolver.tudelft.nl/uuid:afad2560-3e7b-4b43-9997-17b4e24e9e02","Morphodynamic equilibria in double–inlet systems: Their existence, multiplicity and stability","Deng, X. (TU Delft Mathematical Physics)","Schuttelaars, H.M. (promotor); De Mulder, T, (promotor); Delft University of Technology (degree granting institution)","2023","Tidal inlet systems, which consist of back–barrier basins connected to the open sea by one or multiple inlets, are found at many places along sandy coasts. They are valuable for ecology (breeding and feeding areas), economy (gas–mining and sand–mining) and recreation, and are important for coastal safety. But they are also sensitive to external forcings like prevailing currents, tides, winds, sea level rise and human interferences. Therefore, it is important to investigate the morphodynamic behaviour of these tidal inlet systems, especially the formation of the channels and shoals. In this thesis, idealized models will be developed to study so–called double–inlet systems, which are tidal basins with two inlets connecting to the open sea. To assess the morphodynamic behaviour of double–inlet systems, a one–dimensional idealized model is developed. In this model, the water motion is governed by cross– sectionally averaged shallow water equations, forced by tides prescribed at the seaward boundaries. Sediment transport is governed by a width–averaged and depth–integrated advection diffusion equation,with sink and source terms. The bed evolution is described by the cross–sectionally averaged equation for the concentration of mass in a sediment layer. A system is said to be in morphodynamic equilibrium if the bed does not evolve on a long (morphodynamic) timescale anymore. The model is first analysed without the presence of externally prescribed overtides, so the water motion is only forced by theM2 tidal constituents. 
To systematically analyse the sensitivity of the resulting morphodynamic equilibria to the characteristics of the M2 forcing, a continuation approach is employed to obtain these equilibria in the parameter space spanned by the relative phase and amplitude of the M2 tidal constituent. In this parameter space, it was found that there are regions where no morphodynamic equilibrium, one equilibrium or multiple equilibria can exist. When there is no morphodynamic equilibrium, the double–inlet system is reduced to two single–inlet systems. For a certain parameter setting, four morphodynamic equilibria are found. The water depths of these four equilibria are further analysed, as well as the sediment transport contributions. The influence of depth variations, the presence of externally generated overtides and width variations in this model are then further analysed for the stable morphodynamic equilibria. The model finally allows a qualitative comparison with observations in the Marsdiep–Vlie inlet system in the Dutch Wadden Sea. Using characteristic values of this system, one stable equilibrium is obtained, suggesting that this double–inlet system can be stable on the long morphodynamic timescales. Next, the morphodynamic model is extended to include dynamics in the lateral direction. The model consists of depth–averaged shallow water equations neglecting the effects of earth rotation, a depth–integrated concentration equation and a tidally–averaged bottom evolution equation. Since the equations are still averaged over depth, a 2DH model is obtained. With this idealized model, the initial formation of channel–shoal patterns in a double–inlet system with a rectangular geometry was systematically investigated. Utilizing infinitesimally small perturbations with a lateral structure, the initial formation of channels and shoals can be expected if the laterally uniform morphodynamic equilibria are linearly unstable with respect to these perturbations.
When the water motion is forced only by an M2 tidal constituent, restricting attention to that part of the parameter space spanned by the relative phase and amplitude of the M2 tidal forcing where laterally uniform morphodynamic equilibria exist, it is found that these equilibria can be either stable against two–dimensional perturbations, or linearly unstable. When linearly unstable, the instabilities can be due either to diffusive mechanisms or to advective mechanisms. When the morphodynamic equilibria become unstable due to diffusive processes, the classical diffusive mechanism has a destabilizing effect, while the topographically induced diffusive mechanism has a stabilizing effect. The associated eigenvalues are all real, implying exponential growth/decay in time. When the advective mechanism results in linear instabilities, the eigenvalues are complex, implying that bedforms do not only grow/decay in time, but also migrate. When external overtides and a residual discharge are included, the laterally uniform morphodynamic equilibria can be unstable due to the convergences and divergences of both (internally and externally generated) advective and diffusive transport. Finally, we study channels and shoals in double–inlet systems, using a scaled depth–averaged model. This model consists of scaled shallow water motion equations, a scaled depth–integrated concentration equation and a scaled bottom evolution equation. By focusing on a short rectangular tidal basin, laterally uniform morphodynamic equilibria can be found. These equilibria are either linearly stable or linearly unstable due to diffusive processes. When varying one or more parameters, such as the friction parameter and the width of the system, bifurcations can be found where the stability of the morphodynamic equilibria changes. Using the associated eigenfunctions as a load vector, the arclength method allows switching between branches.
At different branches, morphodynamic equilibria are characterized by lateral variations with different mode numbers. When default parameters are used, the resulting bifurcation diagrams reveal that multiple morphodynamic equilibria exist.","Tidal embayment; Process–based model; Idealized model; Double–inlet systems; Morphodynamic equilibria; Bifurcations","en","doctoral thesis","","978-94-6366-661-9","","","","","","","","","Mathematical Physics","","",""
"uuid:0e03913c-898e-4392-8de5-072a7ead7fd6","http://resolver.tudelft.nl/uuid:0e03913c-898e-4392-8de5-072a7ead7fd6","Optimal Mixing Evolutionary Algorithms for Large-Scale Real-Valued Optimization: Including Real-World Medical Applications","Bouter, P.A. (TU Delft Algorithmics; Centrum Wiskunde & Informatica (CWI))","Bosman, P.A.N. (promotor); Alderliesten, T. (copromotor); Delft University of Technology (degree granting institution)","2023","In recent years, the use of Artificial Intelligence (AI) has become prevalent in a large number of societally relevant, real-world problems, e.g., in the domains of engineering and health care. The field of Evolutionary Computation (EC) can be considered to be a sub-field of AI, concerning optimization using Evolutionary Algorithms (EAs), which are population-based (meta-)heuristics that employ the Darwinian principles of evolution, i.e., variation and selection. Such EAs are historically mainly considered for the optimization of difficult, non-linear problems in a Black-Box Optimization (BBO) setting, because EAs can effectively optimize such problems even when very little is known about the optimization problem and its structure. This is in contrast to optimization methods that are specifically designed for certain problems of which the definition and structure are known, i.e., a White-Box Optimization (WBO) setting.","Evolutionary Algorithms; Gene-pool Optimal Mixing; Gray-box optimization; Large-scale optimization; Real-valued optimization; Multi-objective Optimisation; Graphics Processing Unit (GPU); CUDA; Brachytherapy; Treatment planning; Deformable image registration","en","doctoral thesis","","978-94-6366-648-0","","","","","","","","","Algorithmics","","",""
"uuid:6de8ebab-7dc1-4a81-bac7-323e53db9592","http://resolver.tudelft.nl/uuid:6de8ebab-7dc1-4a81-bac7-323e53db9592","Training Generative Adversarial Networks via Stochastic Nash Games","Franci, B. (TU Delft Team Sergio Grammatico); Grammatico, S. (TU Delft Team Sergio Grammatico; TU Delft Team Bart De Schutter)","","2023","Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: a generator and a discriminator. These two neural networks compete against each other through an adversarial process that can be modeled as a stochastic Nash equilibrium problem. Since the associated training process is challenging, it is fundamental to design reliable algorithms to compute an equilibrium. In this article, we propose a stochastic relaxed forward-backward (SRFB) algorithm for GANs, and we show convergence to an exact solution when an increasing number of data is available. We also show convergence of an averaged variant of the SRFB algorithm to a neighborhood of the solution when only a few samples are available. In both cases, convergence is guaranteed when the pseudogradient mapping of the game is monotone. This assumption is among the weakest known in the literature. Moreover, we apply our algorithm to the image generation problem.","Convergence; Games; Generative adversarial networks; Generative adversarial networks (GANs); Generators; Neural networks; stochastic Nash equilibrium (SNE) problems (SNEPs); Stochastic processes; Training; two-player game; variational inequalities.","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-02-26","","","Team Sergio Grammatico","","",""
"uuid:1fe808ba-a627-4bc9-aaea-03f90883e5aa","http://resolver.tudelft.nl/uuid:1fe808ba-a627-4bc9-aaea-03f90883e5aa","Compatibility Assessment of Multistatic/Polarimetric Clutter Data with the SIRP Model","Aubry, Augusto (Università degli Studi di Napoli Federico II); Carotenuto, Vincenzo (Università degli Studi di Napoli Federico II); De Maio, Antonio (Università degli Studi di Napoli Federico II); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2023","This article deals with the statistical inference of simultaneously recorded co- and cross-polarized bistatic coherent sea-clutter returns at S-band. This study is conducted employing appropriate statistical learning tools, involving the complex envelope of data, to assess the compliance of the available measurements with the spherically invariant random process (SIRP) representation, as well as to analyze possible texture correlations among the diverse polarimetric channels. Moreover, the spatial heterogeneity of the sea-clutter data is studied. The results highlight that the SIRP model is a good candidate for the representation of bistatic coherent clutter and usually the coherence time of the SIRP texture at the bistatic nodes is longer than that in the monostatic sensing. Notably, at bistatic angles in order of 60°, the quadrature components of the cross-polarized bistatic measurements substantially exhibit a Gaussian behavior. These achievements further shed light on the bistatic sea-clutter diversity from the geometric and polarimetric point of view.","Multistatic/polarimetric radar; Spherically Invariant Random Process (SIRP); Geometry; sea-clutter; coherence time; spatial heterogeneity","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-08-31","","","Microwave Sensing, Signals & Systems","","",""
"uuid:a8368a60-6752-4061-a7ec-3576f4e9e44b","http://resolver.tudelft.nl/uuid:a8368a60-6752-4061-a7ec-3576f4e9e44b","A Low-Spur Fractional-N PLL Based on a Time-Mode Arithmetic Unit","Gao, Z. (TU Delft Electronics); He, J. (TU Delft Electronics); Fritz, Martin (Sony Europe Limited, Germany); Shen, Y. (TU Delft Electronics); Zong, Z. (TU Delft Electronics); Spalink, Gerd (Sony Europe Limited, Germany); Alavi, S.M. (TU Delft Electronics); Staszewski, R.B. (TU Delft Electronics); Babaie, M. (TU Delft Electronics)","","2023","This article introduces a low-jitter low-spur fractional-N phase-locked loop (PLL) adopting a new concept of a time-mode arithmetic unit (TAU) for phase error extraction. The TAU is a time-signal processor that calculates the weighted sum of input time offsets. It processes two inputs - the period of a digitally controlled oscillator (DCO) and the instantaneous time offset between the DCO and reference clock edges - and then extracts the DCO phase error by calculating their weighted sum. The prototype, implemented in 40-nm CMOS, achieves 182-fs rms jitter with 3.5-mW power consumption. In a near-integer channel, it shows the worst fractional spur below -59 dBc. Under considerable supply or temperature variations, the worst spur still remains below -51.7 dBc without any background calibration tracking.","Arithmetic; Capacitors; Clocks; Digital-to-time converter (DTC); fractional spur; Microelectronics; Phase locked loops; phase-locked loop (PLL); process voltage and temperature (PVT); Switches; time-mode arithmetic unit (TAU); Voltage","en","journal article","","","","","","","","","","","Electronics","","",""
"uuid:82defedc-e3ad-4a3f-b0fd-a7c7f6020e02","http://resolver.tudelft.nl/uuid:82defedc-e3ad-4a3f-b0fd-a7c7f6020e02","Circulant Shift-based Beamforming for Secure Communication with Low-resolution Phased Arrays","Patel, Kartik (The University of Texas at Austin); Myers, N.J. (TU Delft Team Nitin Myers); Heath, Robert W. (University of North Carolina)","","2023","Millimeter wave (mmWave) technology can achieve high-speed communication due to the large available spectrum. Furthermore, the use of directional beams in mmWave systems provides a natural defense against physical layer security attacks. In practice, however, the beams are imperfect due to mmWave hardware limitations such as the low resolution of the phase shifters. These imperfections in the beam pattern introduce an energy leakage that can be exploited by an eavesdropper. To defend against such eavesdropping attacks, we propose a directional modulation-based defense technique where the transmitter applies random circulant shifts of a beamformer. We show that the use of random circulant shifts together with appropriate phase adjustment induces artificial phase noise (APN) in directions different from that of the target receiver. Our method corrupts the phase at the eavesdropper without affecting the communication link of the target receiver. We also experimentally verify the APN induced due to circulant shifts, using channel measurements from a 2-bit mmWave phased array testbed. Using simulations, we study the performance of the proposed defense technique against a greedy eavesdropping strategy in a vehicle-to-infrastructure scenario.
The proposed technique achieves better defense than the antenna subset modulation, without compromising on the communication link with the target receiver.","Antenna arrays; Array signal processing; Eavesdropping; Millimeter wave communication; Phase shifters; Phased arrays; Symbols","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-04-06","","","Team Nitin Myers","","",""
"uuid:5b2a7880-d105-49df-9e84-e40bbb942bf6","http://resolver.tudelft.nl/uuid:5b2a7880-d105-49df-9e84-e40bbb942bf6","Self-Calibration of Acoustic Scalar and Vector Sensor Arrays","Ramamohan, Krishnaprasad Nambur (Microflown Technologies, Arnhem); Chepuri, Sundeep Prabhakar (Indian Institute of Science India); Comesana, Daniel Fernandez (Microflown Technologies, Arnhem); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2023","In this work, we consider the self-calibration problem of joint calibration and direction-of-arrival (DOA) estimation using acoustic sensor arrays. Unlike many previous iterative approaches, we propose solvers that can be readily used for both linear and non-linear arrays for jointly estimating the sensor gain, phase errors, and the source DOAs. We derive these algorithms for both the conventional element-space and covariance data models. We focus on sparse and regular arrays formed using scalar sensors as well as vector sensors. The developed algorithms are obtained by transforming the underlying non-linear calibration model into a linear model, and subsequently by using convex relaxation techniques to estimate the unknown parameters. We also derive identifiability conditions for the existence of a unique solution to the self-calibration problem. To demonstrate the effectiveness of the developed techniques, numerical experiments and comparisons to state-of-the-art methods are provided.
Finally, the results from an experiment that was performed in an anechoic chamber using an acoustic vector sensor array are presented to demonstrate the usefulness of the proposed self-calibration techniques.","Acoustics; Calibration; Direction-of-arrival estimation; Manifolds; Measurement uncertainty; Sensor arrays; Signal processing algorithms","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-04-24","","","Signal Processing Systems","","",""
"uuid:c1530e47-4cb5-4f88-a13f-6a8e8b820f4c","http://resolver.tudelft.nl/uuid:c1530e47-4cb5-4f88-a13f-6a8e8b820f4c","Digital Thread Roadmap for Manufacturing and Health Monitoring the Life Cycle of Composite Aerospace Components","Eskue, N.D. (TU Delft Structural Integrity & Composites)","","2023","This paper provides a detailed review of a digital thread for composite aerospace components. The current state of the digital thread continues to progress at an ever-accelerating rate due to advancements in supporting technologies such as AI, data capture/processing/storage, sensors, simulation, and blockchain. While the individual steps that make up the digital thread have made manufacturing innovation and benefits possible, the connection points of the thread are not consistently solid, with many experiments and proof-of-concepts being conducted, but with few full digital threads in deployment. Key gaps include the ability to handle such large and continuous amounts of data, the infrastructure needed to capture and process them for insight, and the AI-based analytics to build and scale enough to obtain the expected exponential benefits for life cycle insight and manufacturing optimization. Though some of these gaps may take specific technology innovations to advance, there is a specific roadmap that can be deployed immediately in order to obtain “rolling ROI” benefits that will scale in value as this cycle is repeated across the product line.","digital thread; artificial intelligence; digital twin; process control; predictive maintenance; factory optimization; composites manufacturing; structural health monitoring; blockchain; life cycle optimization","en","review","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:9b5d8b11-119a-4e5f-a2ee-b525dd23defc","http://resolver.tudelft.nl/uuid:9b5d8b11-119a-4e5f-a2ee-b525dd23defc","Role of physical attributes of preferred building facades on perceived visual complexity: a discrete choice experiment","Hashemi Kashani, S. Mahdi (Golestan University); Pazhouhanfar, Mahdieh (Golestan University); van Oel, C.J. (TU Delft Design & Construction Management)","","2023","Complexity is known to be a crucial psychological factor influencing the evaluation of building facade preferences. However, little is known about the role of physical attributes of preferred building facades on perceived visual complexity. The objective of this study is to assess the perceived visual complexity of urban building facades in terms of physical attributes at different levels. Discrete choice experiments were used to study the perceived visual complexity of preferred building facades. A sample of 213 students from Golestan University evaluated preference and perceived visual complexity of 36 pairs of images based on ten physical attributes of building facades at different levels: material (brick, stone), the contrast of materials (absent, present), color (absent, present), ornament (high, low), curve (straight, curved), vegetation (plants, no plants), window orientation (vertical, horizontal), fenestration (large, small), articulation (side recesses, flat) and architectural style (modern, classic, traditional). The results revealed that all physical attributes of preferred building facades had a significant effect on perceived visual complexity except for three attributes: architectural style, color and window-to-wall size. Thus, participants preferred a high-ornament facade with curved lines, vegetation, classical style, articulation, contrast between materials, as well as vertical windows. The articulation and ornament attributes had the most significant effect on perceived visual complexity.
The results of this study can help city planners, architects, and designers to design facades that better match general preferences and to reduce the visual pollution of cities.","Building facades; Information-processing theory; Multinomial logit model; Visual complexity","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-08-12","","","Design & Construction Management","","",""
"uuid:c9584adb-09b5-4f27-933c-49001a60ef47","http://resolver.tudelft.nl/uuid:c9584adb-09b5-4f27-933c-49001a60ef47","Lower temperature heating integration in the residential building stock: A review of decision-making parameters for lower-temperature-ready energy renovations","Wahi, P. (TU Delft Environmental & Climate Design); Konstantinou, T. (TU Delft Architectural Technology); Tenpierik, M.J. (TU Delft Environmental & Climate Design); Visscher, H.J. (TU Delft Design & Construction Management)","","2023","Lower temperature heating (LTH) involves using the lowest possible supply temperatures to meet residential heating demands, thus supporting the integration of sustainable heating sources and decarbonising the existing residential stock. However, choosing appropriate energy renovation options to prepare existing dwellings for LTH presents decision-making challenges due to the heterogeneous dwelling stock with varying building characteristics, numerous renovation options, and various performance indicators for evaluating trade-offs. This study aims to review the scientific literature on integrating LTH into existing dwellings to identify the building characteristics for evaluating the potential of using LTH and the necessity for renovations, to present a systematic method for organising renovation options, and to summarise key performance indicators. The study employed the SALSA (search, appraisal, synthesis and analysis) framework for systematic review and identified 24 scientific publications. Findings show that dwelling characteristics such as compactness ratio, thermal insulation, thermal bridges, airtightness, ventilation systems, space heating system capacity and supply temperature level are essential for investigating LTH potential and the need for renovations. Most research lacks qualitative renovation criteria and product-level information for selecting renovation options.
Key performance indicators related to energy efficiency, thermal comfort and quality of service can help indicate the possible solutions, while those related to environmental and economic performance indicate the feasibility of possible solutions. Nevertheless, there is no standard set of criteria for indicating the dwelling's readiness for using LTH. These findings can help address the decision-making challenges of selecting appropriate renovation strategies to enable the use of LTH and contribute to decarbonising the built environment.","Lower temperature supply; Existing residential stock; Energy transition; Sustainable heating sources; Decision-making process","en","review","","","","","","","","","","","Environmental & Climate Design","","",""
"uuid:c4d42d9b-8ddd-4338-a913-0fc6a5230fa3","http://resolver.tudelft.nl/uuid:c4d42d9b-8ddd-4338-a913-0fc6a5230fa3","Phase-Coded FMCW for Coherent MIMO Radar","Kumbul, U. (TU Delft Microwave Sensing, Signals & Systems); Petrov, N. (TU Delft Microwave Sensing, Signals & Systems; NXP Semiconductors); Silveira Vaucher, C. (TU Delft Electronics; NXP Semiconductors); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","The phase-coded linear-frequency-modulated continuous-wave (PC-FMCW) waveform with a low sampling processing strategy is studied for coherent multiple-input multiple-output (MIMO) radar. The PC-FMCW MIMO structure, which jointly uses both fast-time and slow-time coding, is proposed to reduce sidelobe levels while preserving high range resolution, unambiguous velocity, good Doppler tolerance, and low sampling needs. The sensing performance and practical aspects of the introduced PC-FMCW MIMO structure are evaluated theoretically and verified experimentally. The numerical simulations and experiments demonstrate that the proposed MIMO structure keeps the advantages of the linear-frequency-modulated continuous-wave (LFMCW) waveform, including computational efficiency and low sampling demands, while having the ability to provide low sidelobe levels with simultaneous transmission.","Linear frequency modulation (LFM); multiple-input multiple-output (MIMO); phase-modulated chirps; radar signal processing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Microwave Sensing, Signals & Systems","","",""
"uuid:89a98edb-45e1-4645-bc8b-867b86870914","http://resolver.tudelft.nl/uuid:89a98edb-45e1-4645-bc8b-867b86870914","A Comparative Study of Optimization Models for Condition-Based Maintenance Scheduling of an Aircraft Fleet","Tseremoglou, I. (TU Delft Air Transport & Operations); van Kessel, Paul J. (KLM Royal Dutch Airlines); Santos, Bruno F. (TU Delft Air Transport & Operations)","","2023","Condition-based maintenance (CBM) scheduling of an aircraft fleet in a disruptive environment while considering health prognostics for a set of systems is a very complex combinatorial problem, which is becoming more challenging in light of the uncertainty included in health prognostics. This type of problem falls under the broad category of resource-constrained scheduling problems under uncertainty and is often solved using a mixed integer linear programming (MILP) formulation. While a MILP framework is very promising, the problem size can scale exponentially with the number of considered aircraft and considered tasks, leading to significantly high computational costs. The most recent advances in artificial intelligence have demonstrated the capability of deep reinforcement learning (DRL) algorithms to alleviate this curse of dimensionality, as once the DRL agent is trained, it can achieve real-time optimization of the maintenance schedule. However, there is no guarantee of optimality. These comparative merits of a MILP and a DRL formulation for the aircraft fleet maintenance scheduling problem have not been discussed in the literature. This study is a response to this research gap. We conduct a comparison of a MILP and a DRL scheduling model, which are used to derive the optimal maintenance schedule for various maintenance scenarios for aircraft fleets of different sizes in a disruptive environment, while considering health prognostics and the available resources for the execution of each task. 
The quality of solutions is evaluated on the basis of four planning objectives, defined according to real airline practice. The results show that the DRL approach achieves better results with respect to scheduling of prognostics-driven tasks and requires less computational time, whereas the MILP model produces more stable maintenance schedules and induces less maintenance ground time. Overall, the comparison provides valuable insights for the integration of health prognostics in airline maintenance practice.","condition-based maintenance (CBM); partially observable Markov decision process (POMDP); partially observable Monte Carlo planning (POMCP); deep reinforcement learning (DRL); mixed integer linear programming (MILP); planning under uncertainty","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:17490fa1-ec01-438d-afe5-f0ae3100c3a6","http://resolver.tudelft.nl/uuid:17490fa1-ec01-438d-afe5-f0ae3100c3a6","Photo-electrocatalytic based removal of acetaminophen: Application of visible light driven heterojunction based BiVO4/BiOI photoanode","Ali, A.Z. (TU Delft Sanitary Engineering); Wu, Y. (Student TU Delft); Doekhi-Bennani, Y. (TU Delft Sanitary Engineering); van der Hoek, J.P. (TU Delft Sanitary Engineering; Waternet)","","2023","The presence of organic micro-pollutants (OMPs) in wastewater treatment effluents is becoming a major threat to water safety and to aquatic and human health. Photo-electrocatalytic based advanced oxidation process (AOP) is one of the emerging and effective techniques to degrade OMPs through an oxidative mechanism. This study investigated the application of a heterojunction based BiVO4/BiOI photoanode for acetaminophen (40 μg L−1) removal in demineralized water. Photoanodes were fabricated by electrodeposition of BiVO4 and BiOI photocatalytic layers. Optical (UV–vis diffusive reflectance spectroscopy), structural (XRD, SEM, EDX) and opto-electronic (IPCE) characterization confirmed the successful formation of a heterojunction for enhanced charge separation efficiency. The heterojunction photoanode showed an incident photon to current conversion efficiency of 16% (λmax = 390 nm) at an external voltage of 1 V under AM 1.5 standard illumination. The application of the BiVO4/BiOI photoanode in the removal of acetaminophen at 1 V (external bias) vs Ag/AgCl under simulated sunlight showed 87% removal efficiency within the first 120 min compared to 66% removal efficiency of the BiVO4 photoanode. Similarly, combining BiVO4 and BiOI exhibited a 57% increase in the first order removal rate coefficient compared to BiVO4. The photoanodes also showed moderate stability and reusability, with a 26% decrease in overall degradation efficiency after three cycles of 5 h each.
The results obtained in this study can be considered as a stepping stone towards the effective removal of acetaminophen as an OMP present in wastewater.","Organic micro-pollutants; Advanced oxidation process; Photoelectrocatalysis; Heterojunction photoanodes; BiVO4/BiOI; Acetaminophen","en","journal article","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:0f1ac556-4319-4eee-9147-290d49bb0fe4","http://resolver.tudelft.nl/uuid:0f1ac556-4319-4eee-9147-290d49bb0fe4","Digitization of chemical process flow diagrams using deep convolutional neural networks","Theisen, M.F. (TU Delft ChemE/Product and Process Engineering); Nishizaki Flores, K.F. (TU Delft ChemE/Product and Process Engineering); Schulze Balhorn, L. (TU Delft ChemE/Product and Process Engineering); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering)","","2023","Advances in deep convolutional neural networks led to breakthroughs in many computer vision applications. In chemical engineering, a number of tools have been developed for the digitization of Process and Instrumentation Diagrams. However, there is no framework for the digitization of process flow diagrams (PFDs). PFDs are difficult to digitize because of the large variability in the data, e.g., there are multiple ways to depict unit operations in PFDs. We propose a two-step framework for digitizing PFDs: (i) unit operations are detected using a deep learning powered object detection model, (ii) the connectivities between unit operations are detected using a pixel-based search algorithm. To ensure robustness, we collect and label over 1000 PFDs from diversified sources including various scientific journals and books. To cope with the high intra-class variability in the data, we define 47 distinct classes that account for different drawing styles of unit operations. Our algorithm delivers accurate and robust results on an independent test set. We report promising results for line and unit operation detection with an Average Precision at 50 percent (AP50) of 88% and an Average Precision (AP) of 68% for the detection of unit operations.","Deep convolutional neural network; Digitalization; Flowsheet digitization; Machine learning; Object detection; Process flow diagrams (PFD)","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:334c7661-a53b-4f06-9a8e-8033b2095bba","http://resolver.tudelft.nl/uuid:334c7661-a53b-4f06-9a8e-8033b2095bba","Automation on thermal control of blast furnace","Masuda, Ryosuke (JFE Steel Corp.); Hashimoto, Y. (JFE Steel Corp.); Mulder, Max (TU Delft Control & Simulation); van Paassen, M.M. (TU Delft Control & Simulation); Kano, Manabu (Kyoto University)","","2023","Accurate process control through automation is the key to achieving efficient and stable operation of a blast furnace. In this study, we developed an automatic control system of hot metal temperature (HMT). To cope with the slow and complex process dynamics of the blast furnace, we constructed a control algorithm that predicts eight-hour-ahead HMT using a two-dimensional (2D) transient model and calculates optimal target pulverized coal ratio (PCR) and pulverized coal flow rate by non-linear model predictive control (NMPC). An evaluation in a real plant showed that the developed control system suppressed the effects of disturbances, such as changes in the coke ratio and blast volume, on the HMT. The root mean square (RMS) of the control deviation of HMT was successfully reduced by 1.6 °C compared to the conventional manual operation.","Hot metal temperature; Industrial application; Model predictive control; Process control","en","journal article","","","","","","","","","","","Control & Simulation","","",""
"uuid:630778e8-9778-49a1-8d79-70cf5d338175","http://resolver.tudelft.nl/uuid:630778e8-9778-49a1-8d79-70cf5d338175","Thermomechanical Oriented Reliability Enhancement of Si MOSFET Panel-Level Packaging Fusing Ant Colony Optimization With Backpropagation Neural Network","Jiang, Jing (Fudan University); Chen, Wei (Fudan University); Qian, Yichen (Hohai University); Meda, Abdulmelik H. (The Hong Kong Polytechnic University); Fan, X. (Lamar University); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials); Fan, J. (Fudan University)","","2023","Considerable advancements in power semiconductor devices have resulted in such devices being increasingly adopted in applications of energy generation, conversion, and transmission. Hence, we proposed a fan-out panel-level packaging (FOPLP) design for 30-V Si-based metal-oxide-semiconductor field-effect transistor (MOSFET). To achieve superior reliability of packaging, we applied the nondominated sorting genetic algorithm with elitist strategy (NSGA-II) and ant colony optimization-backpropagation neural network (ACO-BPNN) to optimize the design of redistribution layer (RDL) in FOPLP. We first quantified the thermal resistance and thermomechanical coupling stress of the designed package under thermal cycling loading. Next, NSGA-II and ACO-BPNN were used to optimize the size of the RDL blind via. 
Finally, the effectiveness of the proposed reliability optimization methods was verified by performing thermal shock reliability aging tests on the prepared devices.","Ant Colony Neural Network; Fan-out panel-level packaging; Genetic Algorithm; MOSFET; Packaging; Power device; Reliability; Reliability optimization; Stress; Thermal resistance; Thermal stresses; Thermomechanical processes","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-10-09","","","Electronic Components, Technology and Materials","","",""
"uuid:109da31e-54c6-47b2-8b6e-8232daa5eb72","http://resolver.tudelft.nl/uuid:109da31e-54c6-47b2-8b6e-8232daa5eb72","Graph Greenifier: Towards Sustainable and Energy-Aware Massive Graph Processing in the Computing Continuum","Iosup, Alexandru (Vrije Universiteit Amsterdam); Prodan, Radu (Aau Klagenfurt, Klagenfurt); Varbanescu, Ana Lucia (University of Twente); Talluri, Sacheendra (Aau Klagenfurt, Klagenfurt); Magalhaes, Gilles (Aau Klagenfurt, Klagenfurt); Hokstam, Kailhan (Aau Klagenfurt, Klagenfurt); Zwaan, Hugo (Vrije Universiteit Amsterdam); van Beek, V.S. (TU Delft Dataintensive Systems); Farahani, Reza (Aau Klagenfurt, Klagenfurt)","","2023","Our society is increasingly digital, and its processes are increasingly digitalized. As an emerging technology for the digital society, graphs provide a universal abstraction to represent concepts and objects, and the relationships between them. However, processing graphs at a massive scale raises numerous sustainability challenges; becoming energy-aware could help graph-processing infrastructure alleviate its climate impact. Graph Greenifier aims to address this challenge in the conceptual framework offered by the Graph Massivizer architecture. We present an early vision of how Graph Greenifier could provide sustainability analysis and decision-making capabilities for extreme graph-processing workloads. Graph Greenifier leverages an advanced digital twin for data center operations, based on the OpenDC open-source simulator, a novel toolchain for workload-driven simulation of graph processing at scale, and a sustainability predictor. The input to the digital twin combines monitoring of the information and communication technology infrastructure used for graph processing with data collected from the power grid. 
Graph Greenifier thus informs providers and consumers on operational sustainability aspects, requiring mutual information sharing, reducing energy consumption for graph analytics, and increasing the use of electricity from renewable sources.","computing continuum; digital twin; energy-awareness; graph greenifier; graph massivizer; graph processing; scalability; sustainability","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Dataintensive Systems","","",""
"uuid:10b80a37-9563-4770-8ed4-6140294446d9","http://resolver.tudelft.nl/uuid:10b80a37-9563-4770-8ed4-6140294446d9","Renovation process challenges and barriers: addressing the communication and coordination bottlenecks in the zero-energy building renovation workflow in European residential buildings","Prieto Hoces, A.I. (TU Delft Design of Construction); Armijos Moya, T.E. (TU Delft Architectural Technology); Konstantinou, T. (TU Delft Architectural Technology)","","2023","The implementation of Nearly Zero-Energy Buildings (NZEB) renovation packages in Europe needs to be accelerated to meet the current decarbonization goals. To achieve this level of performance, building renovation strategies should shift towards solutions that incorporate a multitude of passive and active components, increasing the complexity and costs of the execution. Moreover, it requires the involvement of different stakeholders of the building supply-chain, resulting in additional difficulties in communication and coordination processes. To address this challenge, the present study aims at mapping the renovation process in digital platforms and addressing the respective bottlenecks. In terms of the renovation process, several digital platforms were analysed to identify the type of information that the stakeholders require during the different renovation phases. By structuring the information along the renovation process phases, the different stakeholders can identify when the information can be provided and how the different types of information link to each other.","Building renovation process; building stakeholders; digital technology; information flow; questionnaire; retrofitting; zero-energy buildings","en","journal article","","","","","","","","","","","Design of Construction","","",""
"uuid:a6b222eb-fa9b-4157-9bb7-c0e181f7c471","http://resolver.tudelft.nl/uuid:a6b222eb-fa9b-4157-9bb7-c0e181f7c471","Linear Clustering Process on Networks","Jokic, I. (TU Delft Network Architectures and Services); Van Mieghem, P.F.A. (TU Delft Network Architectures and Services)","","2023","We propose a linear clustering process on a network consisting of two opposite forces: attraction and repulsion between adjacent nodes. Each node is mapped to a position on a one-dimensional line. The attraction and repulsion forces move the nodal position on the line, depending on how similar or different the neighbourhoods of two adjacent nodes are. Based on each node position, the number of clusters in a network and each node's cluster membership are estimated. The performance of the proposed linear clustering process is benchmarked on synthetic networks against widely accepted clustering algorithms such as modularity, the Leiden method, the Louvain method and the non-backtracking matrix. The proposed linear clustering process outperforms the most popular modularity-based methods, such as the Louvain method, on synthetic and real-world networks, while possessing a comparable computational complexity.","Communities; graph clustering; modularity; linear process","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-11-01","","","Network Architectures and Services","","",""
"uuid:f2a0d9f1-2612-4246-971e-ce0b52c5ed0a","http://resolver.tudelft.nl/uuid:f2a0d9f1-2612-4246-971e-ce0b52c5ed0a","Inverse-designed growth-based cellular metamaterials","van 't Sant, S. (Student TU Delft); Thakolkaran, P. (TU Delft Team Sid Kumar); Martínez, Jonàs (Lorraine University); Kumar, Siddhant (TU Delft Team Sid Kumar)","","2023","Advancements in machine learning have sparked significant interest in designing mechanical metamaterials, i.e., materials that derive their properties from their inherent microstructure rather than just their constituent material. We propose a data-driven exploration of the design space of growth-based cellular metamaterials based on star-shaped distances. These two-dimensional metamaterials are based on periodically-repeating unit cells consisting of material and void patterns with non-trivial geometries. Machine learning models exploiting large datasets are then employed to inverse design growth-based metamaterials for tailored anisotropic stiffness. Firstly, a forward model is created to bypass the growth and homogenization process and accurately predict the mechanical properties given a finite set of design parameters. Secondly, an inverse model is used to invert the structure–property maps and enable the accurate prediction of designs for a given anisotropic stiffness query. We successfully demonstrate the frameworks’ generalization capabilities by inverse designing for stiffness properties chosen from outside the domain of the design space.","Cellular metamaterials; Machine learning; Inverse Design; Growth process","en","journal article","","","","","","","","","","","Team Sid Kumar","","",""
"uuid:dc8d5660-100f-486f-86d6-4c573d4109a4","http://resolver.tudelft.nl/uuid:dc8d5660-100f-486f-86d6-4c573d4109a4","From Requirements to Product: an MBSE Approach for the Digitalization of the Aircraft Design Process","Bruggeman, A.M.R.M. (TU Delft Flight Performance and Propulsion); la Rocca, G. (TU Delft Flight Performance and Propulsion)","","2023","During the aircraft conceptual design phase, many different design options need to be explored and compared in a short time frame. To speed up this process, efforts have been made in the past decades to digitalize parts of the design process, with a focus on the automation of the repetitive and non-creative tasks inherent to the iterative design process. Whilst many of the newly developed methodologies focus on specific parts of the design process, a holistic model-based design framework, incorporating the latest design technology developments, is lacking. To fill this gap, this paper presents the latest version of the Design and Engineering Engine (DEE) framework, originally proposed in the early 2000s and progressively matured through the experience of several international research collaborations. The DEE enables the setup and execution of Multidisciplinary Design Analysis and Optimization (MDAO) problems for aircraft (sub)systems, leveraging the automated, rule-based modeling capabilities offered by Knowledge-Based Engineering (KBE) and recent developments in the automatic formulation and integration of MDAO workflows. While the traditional MDAO process focuses on a given product architecture, the DEE also allows architectural design studies and makes use of Model-Based Systems Engineering (MBSE) principles to address the whole design process, from requirements modeling up to the automatic verification of the requirements. In practice, the DEE provides a single conceptual framework or template from which specific design framework instances can be formulated and executed, according to the user's needs.
This paper describes the DEE architecture and its implementation concepts. Furthermore, it demonstrates the application of the DEE template to four different scenarios, ranging from a simple requirement verification study, up to the simultaneous synthesis and optimization of an aircraft system and its production process, including multiple system architecture options.","Model Based Systems Engineering; Multidisciplinary Design Optimization; Aircraft Design Process","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-03-04","","","Flight Performance and Propulsion","","",""
"uuid:4a5a1ff1-617c-49dd-bf96-13c163f032a2","http://resolver.tudelft.nl/uuid:4a5a1ff1-617c-49dd-bf96-13c163f032a2","Cities for citizens! Public value spheres for understanding conflicts in urban planning","Herzog, Rico (Student TU Delft; HafenCity University Hamburg); Goncalves, J. E. (TU Delft Spatial Planning and Strategy); Slingerland, G. (TU Delft Urban Studies); Kleinhans, R.J. (TU Delft Urban Studies); Prang, Holger (HafenCity University Hamburg); Brazier, F.M. (TU Delft System Engineering); Verma, T. (TU Delft Policy Analysis)","","2023","Identifying the diverse and often competing values of citizens, and resolving the consequent public value conflicts, are of significant importance for inclusive and integrated urban development. Scholars have highlighted that relational, value-laden urban space gives rise to many diverse conflicts that vary both spatially and temporally. Although notions of public value conflicts have been conceived in theory, there are few empirical studies that identify such values and their conflicts in urban space. Building on public value theory and using a case-study mixed-methods approach, this paper proposes a new approach to empirically investigate public value conflicts in urban space. Using unstructured participatory data of 4528 citizen contributions from a Public Participation Geographic Information System in Hamburg, Germany, natural language processing and spatial clustering techniques are used to identify areas of potential value conflicts. Four expert interviews assess and interpret these quantitative findings. By integrating quantitative assessments with the qualitative findings of the interviews, we identify 19 general public values and nine archetypical conflicts.
On the basis of these results, this paper proposes a new conceptual model of ‘Public Value Spheres’ that extends the understanding of public value conflicts and helps to further account for the value-laden nature of urban space.","natural language processing; public participation; public values; spatial conflict; urban planning","en","journal article","","","","","","","","","","","Spatial Planning and Strategy","","",""
"uuid:83c13de5-0992-4e3e-b1a6-4b224f77c88e","http://resolver.tudelft.nl/uuid:83c13de5-0992-4e3e-b1a6-4b224f77c88e","Turning waste into value: eco-efficient recovery of by-products from biomass pretreatment in lignocellulosic biorefineries","Jankovic, T.J. (TU Delft BT/Bioprocess Engineering); Straathof, Adrie J.J. (TU Delft BT/Bioprocess Engineering); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","This original research contributes to enhancing the viability of biorefineries by recovering valuable by-products from the liquid remaining after biomass pretreatment with hot liquid water. A novel downstream processing method is developed for the recovery of acetic acid, formic acid, furfural and 5-hydroxymethylfurfural (HMF) by enhanced distillation. The major challenge in this research is the processing of the highly diluted initial solution (>96 wt% water) and the thermodynamic limitations owing to the possible formation of several azeotropes. The new process recovers 78.7% of the acetic acid (at 99.8 wt% purity), while the rest is recycled back to the biomass pretreatment step together with most of the separated water from the initial solution. Over 99.5% of the formic acid, furfural and HMF is also recovered, at purities of 74.7, 98.0 and 100 wt%, respectively. Vapor recompression and heat integration are implemented to decrease the energy use. The results demonstrate a 77.4% decrease in total annual costs (from $3.44 to $0.78 per kg product), a 75.0% reduction in minimum average selling price (from $3.50 to $0.87 per kg product), an 81.1% reduction in energy requirements (from 77.41 to 14.66 kWh thermal per kg product) and an up to 99.7% decrease in CO2 emissions (from 11.17 to 0.03 kg CO2 per kg product).","biomass pretreatment; biorefineries; by-products recovery; downstream processing; lignocellulosic biomass","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:f589d3b0-dcf3-4ebf-9992-7d773d2c710c","http://resolver.tudelft.nl/uuid:f589d3b0-dcf3-4ebf-9992-7d773d2c710c","Evaluating railway track stiffness using axle box accelerations: A digital twin approach","Shen, C. (TU Delft Railway Engineering); Zhang, P. (TU Delft Railway Engineering); Dollevoet, R.P.B.J. (TU Delft Railway Engineering); Zoeteman, A. (ProRail); Li, Z. (TU Delft Railway Engineering)","","2023","While various train-borne techniques have been developed for measuring railway track stiffness, differentiating stiffness at different track layers remains a challenge. This study proposes a digital twin framework for the vehicle–track interaction system, which enables track stiffness evaluations based on axle box accelerations (ABA). The digital twin consists of a physics-based model, a model library and data-driven models. Compared to existing techniques, the proposed method simultaneously evaluates the stiffness of the railpad, sleeper and ballast layers at a sleeper spacing resolution, while being robust to varying track conditions, such as track irregularities and vehicle speeds. This is accomplished by employing a localized frequency-domain ABA feature capable of distinguishing between the characteristics of different track layers. Furthermore, track stiffness is evaluated in near real-time. This is achieved using a model library derived from physics-based simulations of a range of track conditions. Two data-driven models that can quickly select or interpolate model instances contained in the library are developed. During operation, the data-driven models use the measured ABA features as input and then infer the stiffness for the different track layers. The proposed method is applied to evaluate the track stiffness of a downscaled test rig in a case study. The track stiffness evaluated by the proposed method is compared with that obtained through hammer tests and with the observations of the track component conditions.
These comparisons show that the proposed method can capture the stiffness variations due to periodically fastened clamps and substructure misalignments at different speeds. In addition, the proposed method is demonstrated to be superior to the commonly used hammer test method for evaluating track stiffness under loaded conditions.","Railway track stiffness; Axle box acceleration; Digital twin; Physics-based simulation; Gaussian process regression","en","journal article","","","","","","","","","","","Railway Engineering","","",""
"uuid:4186ffbd-3151-4cd9-8ff7-6f05728a0eee","http://resolver.tudelft.nl/uuid:4186ffbd-3151-4cd9-8ff7-6f05728a0eee","Supporting Children’s Metacognition with a Facial Emotion Recognition based Intelligent Tutor System","Ruan, Xingran (University of Edinburgh); Palansuriya, Charaka (University of Edinburgh); Constantin, Aurora (University of Edinburgh); Tsiakas, K. (TU Delft Human Information Communication Design)","","2023","The present study aims to investigate the relationship between emotions experienced during learning and metacognition in typically developing (TD) children and those with autism spectrum disorder (ASD). This will assist us in using machine learning (ML) to develop a facial emotion recognition (FER) based intelligent tutor system (ITS) to support children’s metacognitive monitoring process in order to enhance their learning outcomes. In this paper, we first report the results of our preliminary research, which utilized an ML-based FER algorithm to detect four spontaneous epistemic emotions (i.e., neutral, confused, frustrated, and bored) and six spontaneous basic emotions (i.e., anger, disgust, fear, happiness, sadness, and surprise). Subsequently, we adapted an application (‘BrainHood’) to create ‘Meta-BrainHood’, which embedded our proposed ML-based FER algorithm to examine the relationship between facial emotion expressions and metacognitive monitoring performance in TD children and those with ASD. Finally, we outline the future steps in our research, which adopt the outcomes of the first two steps to construct an ITS to improve children’s metacognitive monitoring performance and learning outcomes.","facial emotion recognition; Intelligent tutor system; learning outcomes; metacognitive monitoring process","en","conference paper","Association for Computing Machinery (ACM)","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-12-19","","","Human Information Communication Design","","",""
"uuid:891074de-6650-4fa5-92f7-0dfddfb5f1bc","http://resolver.tudelft.nl/uuid:891074de-6650-4fa5-92f7-0dfddfb5f1bc","Graph-Time Convolutional Neural Networks: Architecture and Theoretical Analysis","Sabbaqi, M. (TU Delft Multimedia Computing); Isufi, E. (TU Delft Multimedia Computing)","","2023","Devising and analysing learning models for spatiotemporal network data is of importance for tasks including forecasting, anomaly detection, and multi-agent coordination, among others. Graph Convolutional Neural Networks (GCNNs) are an established approach to learn from time-invariant network data. The graph convolution operation offers a principled approach to aggregate information and enables mathematical analysis by exploring tools from graph signal processing. This analysis provides insights into the equivariance properties of GCNNs; spectral behaviour of the learned filters; and the stability to graph perturbations, which arise from support perturbations or uncertainties. However, extending the convolutional learning and respective analysis to the spatiotemporal domain is challenging because spatiotemporal data have more intrinsic dependencies. Hence, a higher flexibility to capture jointly the spatial and temporal dependencies is required to learn meaningful higher-order representations. Here, we leverage product graphs to represent the spatiotemporal dependencies in the data and introduce Graph-Time Convolutional Neural Networks (GTCNNs) as a principled architecture. We also introduce a parametric product graph to learn the spatiotemporal coupling. The convolution principle further allows a similar mathematical tractability as for GCNNs. In particular, the stability result shows GTCNNs are stable to spatial perturbations. However, there is an implicit trade-off between discriminability and robustness; i.e., the more complex the model, the less stable.
Extensive numerical results on benchmark datasets corroborate our findings and show the GTCNN compares favorably with state-of-the-art solutions. We anticipate the GTCNN to be a starting point for more sophisticated models that achieve good performance but are also fundamentally grounded.","Convolution; Convolutional neural networks; Data models; Graph convolutional neural networks; graph signal processing; graph-time neural networks; Numerical stability; Perturbation methods; Spatiotemporal phenomena; Stability analysis; stability to perturbations","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-05-03","","","Multimedia Computing","","",""
"uuid:0078e179-a180-440d-984f-b9e0485100e3","http://resolver.tudelft.nl/uuid:0078e179-a180-440d-984f-b9e0485100e3","The Dodecahedron and the Basket of Fruit: Architecture in the Age of Artificial Intelligence","Corbo, S. (TU Delft Space & Type)","","2023","Starting from the late 1980s, the advent of digital design—the possibility to ideate, develop, and generate projects via computers—has progressively pushed the disciplinary discourse to rethink architecture’s role in society, as well as its formal manifestations. The contemporary evolution of digital architecture has taken different directions, which are sometimes contradictory and ambiguous in their intents. This paper especially focuses attention on one of those directions—the opportunities that artificial intelligence can offer in the future production and communication of architecture. Recent episodes are analysed and contextualised within the historical antinomy between two diverging worldviews that, since the fifteenth century until the end of the twentieth century, have informed the architectural discourse. These worldviews can be exemplified in the dichotomy between the dodecahedron and the basket of fruit.","artificial intelligence; digital culture; architecture; form; process","en","journal article","","","","","","","","","","","Space & Type","","",""
"uuid:9750fc4b-bcc1-4040-9841-c017080add67","http://resolver.tudelft.nl/uuid:9750fc4b-bcc1-4040-9841-c017080add67","Teaching 'how to sketch visual stories' to a professional audience: A Taxonomy of Visualisation Strategies","Hoftijzer, J.W. (TU Delft Human Information Communication Design); Carelsberg, H.M. (TU Delft Human Information Communication Design); Sypesteyn, M. (TU Delft Human Information Communication Design)","Buck, Lyndon (editor); Grierson, Hilary (editor); Bohemia, Erik (editor)","2023","There is a growing interest in the discipline of design sketching and drawing. Whereas its origin lies in the sketching and presenting of tangible (industrially designed) products, the discipline has, since approximately 2010, extended along various dimensions. Various authors have addressed and discussed the most prominent change within the discipline since then: the addition of so-called ‘story telling visuals’: sketches of processes, overviews, systems and e.g. journeys (Corremans and Mulder-Nijkamp 2019, Hoftijzer, Sypesteyn et al. 2020), also named ‘visual thinking’. In fact, sketching as a means of communication has grown across discipline borders, and, consequently, the activity of sketching for communication now enjoys a growing group of practitioners and audiences. The authors, being sketching practitioners and teachers, have been developing sketching course content aligned to this, both for the extending discipline (Bachelor and Master courses) of sketching within Industrial Design and for new audiences. One particular course, a so-called ‘Master Class’, an intensive two-day course taught to an external audience and focused on ‘how to sketch visual stories’, was the subject of an experiment.
Firstly, the course was designed according to specific requirements (audience, goals, pedagogy), to previous insights from course development and evaluation and from workshops offered, and according to a previously described vision and methodology concerning the alignment between sketches of tangible things and sketches of abstract concepts (Hoftijzer, Sypesteyn et al. 2020). Secondly, in order to assess the logic and quality of the short course’s structure and contents, participants were asked to fill out a questionnaire. Together, this experimental set-up, the questionnaire results, and the sketched output of the Master Class have led to new insights and knowledge that will help improve the pedagogic approach of many of the current courses taught, and of the follow-up Master Class in particular.","Visualisation; Sketching; Visual-thinking; Process-sketching; Drawing","en","conference paper","The Design Society, Institution of Engineering Designers","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-03-07","","","Human Information Communication Design","","",""
"uuid:bf642704-ee39-4d2e-93ba-3735bd79bc62","http://resolver.tudelft.nl/uuid:bf642704-ee39-4d2e-93ba-3735bd79bc62","Meso-scale Simulation of the Concrete Failure Process Based on a Piecewise Incremental Elastic-Plastic Lattice Model","Zhang, H. (Shandong University); Jin, Zuquan (Qingdao University of Technology); Jiang, Nengdong (Shandong University); Ge, Zhi (Shandong University); Schlangen, E. (TU Delft Materials and Environment); Ling, Yifeng (Shandong University); Šavija, B. (TU Delft Materials and Environment); Wang, Zheng (Shandong Hi-Speed Group)","","2023","The classical lattice model assumes that local elements behave in an elastic-brittle manner, neglecting the ductility of the mortar matrix. This makes the simulated load-displacement response more brittle than the real one. To solve this issue, a piecewise approach was introduced to describe the elastic-plastic constitutive relation of the lattice elements. The fracture process and the load-displacement response were obtained through a sequentially linear solution approach. The model was calibrated using uniaxial tension and compression tests. It is found that the model can accurately simulate the fracture process and the load-displacement response. Moreover, the model was used to study the size effect in uniaxial tension and the influence of the specimen’s slenderness and boundary confinement on the fracture behavior under compression. It offers a new theoretical method and approach for studying the fracture of concrete.","concrete; elastic-plastic constitutive relation; fracture process; lattice model; meso-scale","zh","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-10-25","","","Materials and Environment","","",""
"uuid:3612ef4e-d9a2-4db0-873b-98f5b72c5e15","http://resolver.tudelft.nl/uuid:3612ef4e-d9a2-4db0-873b-98f5b72c5e15","Eco-efficiency improvements in the propylene-to-epichlorohydrin process","Madej, Łukasz (Student TU Delft); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","BACKGROUND: Epichlorohydrin (ECH) production is an important industrial process, owing to its role in windmill blade manufacture, but it suffers from several drawbacks such as high energy use, large wastewater production and low atom efficiency. This original study investigates a novel chlorohydrin-free technology with an enhanced separation system for ECH production. Rigorous process simulations were performed in Aspen Plus for the classic and novel processes, and a fair techno-economic and sustainability comparison was made between the new catalytic oxidation route and the classic chlorohydrin process. RESULTS: For the hydrogen peroxide (HP) process route, a novel separation system was developed using methanol as solvent, which enables high purity of ECH. Moreover, allyl chloride (ACH) purification was optimized using thermally coupled distillation to improve the energy efficiency of ACH production. The novel HP process provides 88% higher atom efficiency, about 10% higher yield and a smaller amount of by-products, as well as a 13% increase in production capacity and major savings of 98% in wastewater production, while also achieving lower energy use (<40 MJ kg−1 ECH) and reduced carbon dioxide emission (1.13 kg kg−1 ECH). CONCLUSION: The developed HP process route is feasible and economically viable. Also, it can be partly retrofitted to existing ECH plants based on the chlorohydrin route. As both processes use the same intermediate product, only the ECH part of a classic process would be replaced by the novel route, while keeping the common ACH part.
This approach is the most profitable, as only 55% of capital expenditure is required for this modification, while the plant would benefit from all the improvements provided by the novel process.","dividing-wall column; energy efficiency; fluid separation; process intensification","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:79740560-7571-49ef-9b45-81b696bbd4e3","http://resolver.tudelft.nl/uuid:79740560-7571-49ef-9b45-81b696bbd4e3","Lower-Temperature-Ready Renovation: An Approach to Identify the Extent of Renovation Interventions for Lower-Temperature District Heating in Existing Dutch Homes","Wahi, P. (TU Delft Environmental & Climate Design); Konstantinou, T. (TU Delft Architectural Technology; TU Delft Design of Construction); Tenpierik, M.J. (TU Delft Environmental & Climate Design); Visscher, H.J. (TU Delft Design & Construction Management)","","2023","This study presents an approach to determine the extent of renovation interventions required for existing Dutch dwellings aiming to transition to lower-temperature district heating (DH) systems. The proposed method is applied to a typical intermediate terraced house built before 1945 in the Netherlands, and it consists of two steps: first, assessing the potential of a dwelling to be heated with a lower temperature supply from DH systems and subsequently developing and evaluating alternative renovation solutions if necessary. This study defines a set of criteria for evaluating the readiness of a dwelling for lower-temperature heating (LTH), considering energy efficiency and thermal comfort as non-compensatory criteria. The application of the approach reveals that the case study dwelling is presently unsuitable for a medium-temperature (70/50 °C) and low-temperature (55/35 °C) supply compared to a high-temperature supply (90/70 °C), thus requiring energy renovations. Furthermore, this study indicates that moderate intervention levels are required for the dwelling to be lower-temperature-ready for both supply temperature goals. These interventions include strategies and measures that upgrade the building envelope to the minimum insulation levels stipulated by the Dutch Building Decree, improve airtightness, and replace existing radiators with low-temperature radiators.
By systematically narrowing down renovation options, this approach aids in simplifying the decision-making process for selecting renovations for heating dwellings with LTH through DH systems, which could reduce stakeholders’ decision paralysis.","District heating; Lower temperature heating; Renovation; Existing housing stock; Decision-Making Process","en","journal article","","","","","","","","","","","Environmental & Climate Design","","",""
"uuid:3dfb4bfb-365a-4c5d-b4e5-0933314ff70f","http://resolver.tudelft.nl/uuid:3dfb4bfb-365a-4c5d-b4e5-0933314ff70f","Using design thinking to explore teaching problems in Chilean schools","Bravo, Úrsula (Universidad del Desarrollo); Cortés, Catalina (Universidad del Desarrollo); Lloyd, P.A. (TU Delft Methodologie en Organisatie van Design); Jones, Derek (Open University)","","2023","Educational systems face increasingly complex demands, confronting teachers with multidimensional people-centred problems rarely solved by linear or standardised solutions. Nevertheless, teachers must juggle multiple variables simultaneously in their daily work. This can lead to routine and unreflective decisions that do not consider unique situations. Considering that designers’ abductive reasoning could support problem-framing skills, this article discusses how a design thinking approach can contribute to developing reflective teaching practice. This case study explores how 20 Chilean teachers define, frame, and re-frame their pedagogical problems in a design-based teacher professional development programme. Findings revealed three problem-framing triggers that support teachers’ reflection: (a) collaborative discussions, (b) awareness of people and their context, and (c) visualising, making, and testing ideas. Combined, they articulate action and promote reflection, demonstrating the value of a design thinking approach in supporting teachers’ pedagogical decisions.","Chilean teachers; design thinking; problem framing and reframing; Reflective process; reflective teaching","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-10","","","Methodologie en Organisatie van Design","","",""
"uuid:25bbc0d7-038e-44d7-88f8-d4c64f4f034a","http://resolver.tudelft.nl/uuid:25bbc0d7-038e-44d7-88f8-d4c64f4f034a","Continuous Human Activity Classification with Radar Point Clouds and Point Transformer Networks","Kruse, N.C. (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","Owing to its numerous benefits, radar is considered an important sensor for human activity classification. The problem of classifying continuous sequences of activities of unconstrained duration has been studied in this work. To tackle this challenge, a radar data processing method utilizing point transformer networks has been proposed. The method has been experimentally verified on a dataset of human activities, and experiments have been performed to determine its optimal implementation. Promising preliminary results on a 9-class dataset show a test accuracy of approximately 83% and a macro F1 score of approximately 73%.","Human activity recognition; machine learning; radar; point cloud processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-26","","","Microwave Sensing, Signals & Systems","","",""
"uuid:4de2466a-f615-47d2-a263-420d78e490ab","http://resolver.tudelft.nl/uuid:4de2466a-f615-47d2-a263-420d78e490ab","Continuous People Crowd Monitoring defined as a Regression Problem using Radar Networks","Guendel, Ronny (TU Delft Microwave Sensing, Signals & Systems); Ullmann, I. (Friedrich-Alexander-Universität Erlangen-Nürnberg); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","Radar-based human activity recognition in crowded environments using regression approaches is addressed. Whereas previous research has focused on single activities and subjects, the problem of continuous activity recognition involving up to five individuals moving in arbitrary directions in an indoor area is introduced. To treat the problem, a regression-based approach is used, which offers innovative insights into creating robust and accurate systems for monitoring human activities. Novel approaches utilizing LSTM or CNN regression techniques with Linear Regression and Support Vector Machine regressors are compared on extracted features from radar data through the Histogram of Oriented Gradients and Principal Component Analysis. These approaches are rigorously evaluated by a Leave-One-Group-Out method, with performance assessed using common regression metrics such as the RMSE. The most promising outcomes were observed for crowds of three and five individuals, with respective RMSEs of approximately 0.4 and 0.6. These results were primarily achieved by utilizing the micro-Doppler (µD) Spectrogram or range-Doppler data domain.","Radar Signal Processing; Multiple People Monitoring; Distributed Radar; Machine Learning; Deep Learning; Histogram of Oriented Gradients; Principal Component Analysis; Regression; LSTM; CNN","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-26","","","Microwave Sensing, Signals & Systems","","",""
"uuid:4666294f-c331-49e5-aadb-effaea8d86d6","http://resolver.tudelft.nl/uuid:4666294f-c331-49e5-aadb-effaea8d86d6","An adaptive threshold-based unambiguous robust Doppler beam sharpening algorithm for forward-looking MIMO Radar","Yuan, S. (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","The ambiguity problem in forward-looking Doppler beam sharpening is considered. Doppler beam sharpening (DBS) has shown its potential to improve cross-range resolution for automotive radar applications. However, it suffers from ambiguities when targets are positioned symmetrically with respect to the vehicle trajectory. A new approach named 'Robust Unambiguous DBS with Adaptive Threshold' (RUDAT) is proposed to address the problem of ambiguities. It combines DBS with multiple-input-multiple-output (MIMO) radar processing, and is robust to non-ideal movements of the vehicle and fluctuations in the targets' reflectivity. The performance of the proposed method is compared to existing approaches using simulated data with point-like and extended targets, demonstrating good preliminary results.","Doppler beam sharpening; Beam scan; Forward-looking radar; MIMO radar processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-26","","","Microwave Sensing, Signals & Systems","","",""
"uuid:e3eff66e-b61f-4415-a2b5-63f4f5fc5ad1","http://resolver.tudelft.nl/uuid:e3eff66e-b61f-4415-a2b5-63f4f5fc5ad1","A Survey on Radar-Based Continuous Human Activity Recognition","Ullmann, Ingrid (Friedrich-Alexander-Universität Erlangen-Nürnberg); Guendel, Ronny (TU Delft Microwave Sensing, Signals & Systems); Kruse, N.C. (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","Radar-based human motion and activity recognition is currently a topic of great research interest, as the aging population increases and older individuals prefer an independent lifestyle. This technology has a wide range of applications, such as fall detection in assisted living, gesture recognition for human-machine interfaces, and many more. Numerous studies exist on various approaches for radar-based activity capture and classification. However, most of these employ rather artificial data, often obtained in laboratory environments, and typically collected under particular conditions. Specifically, most research so far has aimed at distinguishing a predefined set of single activities with a defined start, stop and duration. This paper aims at drawing the attention to a so far less researched issue, one that will be of vital importance for future real-world application of radar-based human activity recognition: continuous activity recognition, i.e. recognizing specific activities in a stream of several sequential activities with unknown duration and arbitrary transitions between different classes of activities. A review on the current state of the art in this relatively new topic is given, followed by a discussion on future research directions.","Radar applications; radar signal processing; continuous human activity recognition; activities of daily living","en","journal article","","","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:fe52974a-7401-4dfd-832c-708db39102d0","http://resolver.tudelft.nl/uuid:fe52974a-7401-4dfd-832c-708db39102d0","Graph-Time Trend Filtering and Unrolling Network","Sabbaqi, M. (TU Delft Multimedia Computing); Isufi, E. (TU Delft Multimedia Computing)","","2023","Reconstructing missing values and removing noise from network-based multivariate time series requires developing graph-time regularizers capable of capturing their spatiotemporal behavior. However, current approaches based on joint spatiotemporal smoothness, diffusion, or variations thereof may not be effective for time series with discontinuities across the graph or time. To address this challenge, we propose a joint graph-time trend filter operating over a product graph representing spatiotemporal relations. Additionally, we develop a graph-time unrolled neural network to learn the prior from the data, which is based on the alternating direction method of multipliers iterations of the graph-time trend filter and on graph-time convolutional filters. Numerical tests with two synthetic and four real datasets corroborate the effectiveness of both approaches, highlight their inherent trade-offs, and show they compare well with state-of-the-art alternatives.","Graph-time signal processing; graph unrolled networks; trend filtering on graphs","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-05-01","","","Multimedia Computing","","",""
"uuid:5760cc99-d872-455e-ad16-992563ad617e","http://resolver.tudelft.nl/uuid:5760cc99-d872-455e-ad16-992563ad617e","Forecasting Graph Signals with Recursive MIMO Graph Filters","van der Hoeven, Jelmer (Student TU Delft); Natali, A. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2023","Forecasting time series on graphs is a fundamental problem in graph signal processing. When each entity of the network carries a vector of values for each time stamp instead of a scalar one, existing approaches resort to the use of product graphs to combine this multidimensional information, at the expense of creating a larger graph. In this paper, we show the limitations of such approaches, and propose extensions to tackle them. Then, we propose a recursive multiple-input multiple-output graph filter which encompasses many already existing models in the literature while being more flexible. Numerical simulations on a real world data set show the effectiveness of the proposed models.","Forecasting; Graph Signal Processing; Product Graph; Multi-dimensional graph signals","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-05-01","","","Signal Processing Systems","","",""
"uuid:f4c104e3-f9bb-4bd2-bbde-039f118e3883","http://resolver.tudelft.nl/uuid:f4c104e3-f9bb-4bd2-bbde-039f118e3883","Robust Pareto-Optimal Radar Receive Filter Design for Noise and Sidelobe Suppression","Kokke, C.A. (TU Delft Signal Processing Systems); Coutiño, Mario (TNO); Heusdens, R. (Netherlands Defence Academy); Leus, G.J.T. (TU Delft Signal Processing Systems); Anitori, Laura (TNO)","","2023","Integrated sidelobe level is a useful measure to quantify robustness of a waveform-filter pair to unknown range clutter and multiple closely located targets. Sidelobe suppression on receive will incur a loss in the signal to noise ratio after pulse compression. We derive a pulse compression filter that has the greatest integrated sidelobe suppression possible for a given acceptable signal to noise ratio loss. The solution is given in a closed form, which can be adjusted using a single parameter to chose between greater sidelobe or interference and noise suppression. We verify the derived filter using simulations, comparing it to other proposed mismatched filter designs. To expand the robustness of the filter, we additionally investigate noise uncertainty robustness. We derive two robustness measures for noise uncertainty and analyze the performance through simulation.","robust pulse compression; radar signal processing; filter optimization; integrated sidelobe ratio","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-05-01","","","Signal Processing Systems","","",""
"uuid:ab3dbb90-f436-4ca1-9fdf-e99cff86e333","http://resolver.tudelft.nl/uuid:ab3dbb90-f436-4ca1-9fdf-e99cff86e333","Advanced downstream processing of bioethanol from syngas fermentation","Jankovic, T.J. (TU Delft BT/Bioprocess Engineering); Straathof, Adrie J.J. (TU Delft BT/Bioprocess Engineering); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","Syngas fermentation is used industrially to produce diluted bioethanol (about 1–6 wt%). This research study proposes a novel downstream process that recovers bioethanol in an energy-efficient and cost-effective manner, improves fermentation yield by recycling all fermentation broth components (microbes, acetate and water), and is designed for full-scale industrial-level application. Therefore, vacuum distillation at fermentation temperature was conceptually studied as an initial ethanol recovery step, leading to a bottom stream that may be recycled. Advanced separation and purification techniques were designed to recover 99.5% of initially present ethanol as high-purity product (99.8 wt%). Mechanical vapor recompression and heat integration methods were used to maximize sustainability and eco-efficiency of the proposed recovery process. Implementation of these techniques on a process using 6 wt% ethanol feed stream decreased the total annual costs by 54.2% (from 0.175 to 0.080 $/kgEtOH), reduced the primary energy requirement by 66.1% (from 2.82 to 0.96 kWthh/kgEtOH), lowered the CO2 emission by up to 82.6% (from 0.414 to 0.072 kgCO2/kgEtOH), and reduced the fresh water usage by 62.6% (from 0.242 to 0.091 m3W/kgEtOH). Sensitivity analysis for ethanol concentrations ranging from 6 to 1 wt% showed that the recovery costs and energy use increased to 0.336 $/kgEtOH and 1.78 kWthh/kgEtOH respectively. Since ethanol recovery performs better but fermentation will perform worse at higher ethanol concentration in fermentation broth, there is a trade-off concentration for the overall process. 
The current analysis is an important step toward determining this trade-off.","Bioethanol; Downstream processing; Fluid separation; Heat pumps; Syngas fermentation","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:5155f302-fe41-49b0-b3fc-daa791bd61ac","http://resolver.tudelft.nl/uuid:5155f302-fe41-49b0-b3fc-daa791bd61ac","Grouped People Counting Using mm-wave FMCW MIMO Radar","Ren, Liyuan (Student TU Delft); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2023","The problem of radar-based counting of multiple individuals moving as a single group is addressed using an mm-wave multiple-input-multiple-output (MIMO) frequency-modulated continuous wave (FMCW) radar. This problem is challenging because the different individuals are closer to each other than the range/azimuth resolution, and their bulk Doppler signatures are difficult to distinguish, as they tend to move together. A processing pipeline is proposed, based on the combination of a multiple target tracking algorithm with a classifier to track each group and count the number of people within. Specific salient features are defined for the classifier and extracted from range-azimuth maps and cadence velocity diagrams (CVDs). The proposed pipeline has been experimentally validated in several outdoor scenarios with grouped people. 
The results show that the combination of tracking algorithm and classifier in the proposed pipeline outperforms alternative methods from the literature as well as a commercial toolbox for people counting.","Feature extraction; Internet of Things; Legged locomotion; mm-wave radar; People Counting; Pipelines; Radar; radar signal processing; Radar tracking; Spectrogram; tracking and classification","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-01","","","Microwave Sensing, Signals & Systems","","",""
"uuid:771471f3-f1c6-4b15-925e-41b0b7494c8e","http://resolver.tudelft.nl/uuid:771471f3-f1c6-4b15-925e-41b0b7494c8e","Expectancy or Salience?—Replicating Senders’ Dial-Monitoring Experiments With a Gaze-Contingent Window","Eisma, Y.B. (TU Delft Human-Robot Interaction); Bakay, A. (TU Delft Teaching & Learning Services); de Winter, J.C.F. (TU Delft Human-Robot Interaction)","","2023","Introduction
In the 1950s and 1960s, John Senders carried out a number of influential experiments on the monitoring of multidegree-of-freedom systems. In these experiments, participants were tasked with detecting events (threshold crossings) for multiple dials, each presenting a signal with different bandwidth. Senders’ analyses showed a nearly linear relationship between signal bandwidth and the amount of attention paid to the dial, and he argued that humans sample according to bandwidth, in line with the Nyquist–Shannon sampling theorem.
Objective
The current study tested whether humans indeed sample the dials based on bandwidth alone or whether they also use salient peripheral cues.
Methods
A dial-monitoring task was performed by 33 participants. In half of the trials, a gaze-contingent window was used that blocked peripheral vision.
Results
The results showed that, without peripheral vision, humans do not effectively distribute their attention across the dials. The findings also suggest that, when given full view, humans can detect the speed of the dial using their peripheral vision.
Conclusion
It is concluded that salience and bandwidth are both drivers of distributed visual attention in a dial-monitoring task.
Application
The present findings indicate that salience plays a major role in guiding human attention. A subsequent recommendation for future human–machine interface design is that task-critical elements should be made salient.","distributed attention; supervisory control; attentional processes; eye movements; replication study; peripheral vision","en","journal article","","","","","","","","","","","Human-Robot Interaction","","",""
"uuid:835f1e55-6dcb-4e0c-8223-aefe1d26d368","http://resolver.tudelft.nl/uuid:835f1e55-6dcb-4e0c-8223-aefe1d26d368","Sensing and Machine Learning for Automotive Perception: A Review","Pandharipande, Ashish (NXP Semiconductors); Cheng, Chih Hong (Fraunhofer Iks); Dauwels, J.H.G. (TU Delft Signal Processing Systems); Gurbuz, Sevgi Z. (University of South Alabama); Ibanez-Guzman, Javier (Group Renault); Li, Guofa (Chongqing University); Piazzoni, Andrea (Nanyang Technological University); Wang, Pu (Mitsubishi Electric Research Laboratories); Santra, Avik (Infineon Technologies, North America)","","2023","Automotive perception involves understanding the external driving environment and the internal state of the vehicle cabin and occupants using sensor data. It is critical to achieving high levels of safety and autonomy in driving. This article provides an overview of different sensor modalities, such as cameras, radars, and light detection and ranging (LiDAR) used commonly for perception, along with the associated data processing techniques. Critical aspects of perception are considered, such as architectures for processing data from single or multiple sensor modalities, sensor data processing algorithms and the role of machine learning techniques, methodologies for validating the performance of perception systems, and safety. The technical challenges for each aspect are analyzed, emphasizing machine learning approaches, given their potential impact on improving perception. 
Finally, future research opportunities in automotive perception for their wider deployment are outlined.","Advanced driver assistance system (ADAS); automotive perception; autonomous driving; cameras; light detection and ranging (LiDAR); radars; safety; sensor data processing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-09-25","","","Signal Processing Systems","","",""
"uuid:015b1223-18dd-4bee-8ee7-bee25baf18d9","http://resolver.tudelft.nl/uuid:015b1223-18dd-4bee-8ee7-bee25baf18d9","Shading calculation methods and regulation simplifications – The Portuguese case","Oliveira, Marta Fernandes (University of Minho); Mendonça, Paulo (University of Minho); Tenpierik, M.J. (TU Delft Environmental & Climate Design); Santiago, Pedro (Universidade Fernando Pessoa); Silva, José F. (Polytechnic Institute of Viana do Castelo); Silva, Lígia Torres (University of Minho)","","2023","How to analyse the omissions of thermal regulations and evaluate methodologies that provide building execution or thermal certificates that do not correspond to reality and usually incur costs? We can start by analysing different simulation methods and shading calculations that provide solar gains and shadow optimisation. After evaluating how the regulations define the calculation assumptions and how this calculation is performed, the discrepancies (simplifications) that the regulations allow or ignore are presented, and it is exemplified using two case studies. Using the Portuguese regulation as a case study, it leads to incorrect conclusions or assumptions due to unequal access to solar radiation or the shading factor calculation that experiences the omission of angles or time periods. Therefore, the aim is to propose a calculation process (premises) that minimises the discrepancies between simulation (optimisation strategy) and reality (applicability of strategies) for sustainable output.","solar shading; political-legislative premises; shading calculation methods; solar benefits; sustainable city; ineffective enforcement of regulations; process innovation","en","review","","","","","","","","","","","Environmental & Climate Design","","",""
"uuid:bb27aaff-fc77-4bae-ba20-5f14bc509917","http://resolver.tudelft.nl/uuid:bb27aaff-fc77-4bae-ba20-5f14bc509917","The process of value setting through co-design: the case of La Borda, Barcelona","Dos Santos Vieira Brysch, S.L. (TU Delft Real Estate Management); Garcia i Mateu, Adrià (Universitat Oberta de Catalunya); Czischke, D.K. (TU Delft Real Estate Management)","","2023","Against the increasing commodification of housing, a new kind of housing cooperatives has emerged in Catalonia in the last decade. These cooperatives fall within the wider concept of collaborative housing (CH), i.e. they are collectively self-organised projects based on a collaborative design process, or ‘co-design’. In such a process, residents need to adjust their individual expectations and demands in order to reach a collective set of values to realise their housing project. The aim of this paper is to assess how values are set through co-design and translated into a housing project. To do so, we develop an analytical framework to conduct a longitudinal single case-study that traces back the co-design process of the resident-led housing cooperative La Borda, in Barcelona. Our findings shed light on how co-design unfolds and uncover trade-offs carried out to overcome tensions mostly between individual and collective demands and between building costs and quality.","co-design process; Collaborative housing; cooperative housing; design for values; La Borda","en","journal article","","","","","","","","","","","Real Estate Management","","",""
"uuid:7c96c5e9-085f-4d3c-9f66-b92ac8d58441","http://resolver.tudelft.nl/uuid:7c96c5e9-085f-4d3c-9f66-b92ac8d58441","A 2D Ultrasound Phased-Array Transmitter ASIC for High-Frequency US Stimulation and Powering","Rivandi, H. (TU Delft Bio-Electronics); Lopes Marta da Costa, T.M. (TU Delft Bio-Electronics)","","2023","Ultrasound (US) neuromodulation and ultrasonic power transfer to implanted devices demand novel ultrasound transmitters capable of steering focused ultrasound waves in 3D with high spatial resolution and US pressure, while having a miniaturized form factor. Meeting these requirements needs a 2D array of ultrasound transducers directly integrated with a high-frequency 2D phased-array ASIC. However, this imposes severe challenges on the design of the ASIC. In order to avoid the generation of grating lobes, the elements in the 2D phased-array should have a pitch of half of the ultrasound wavelength, which, as frequency increases, highly reduces the area available for the design of high-voltage beamforming channels. This article addresses these challenges by presenting the system-level optimization and implementation of a high-frequency 2D phased-array ASIC. The system-level study focuses on the optimization of the US transmitter toward high-frequency operation while minimizing power consumption. This study resulted in the implementation of two ASICs in TSMC 180 nm BCD technology: firstly, an individual beamforming channel was designed to demonstrate the tradeoffs between frequency, driving voltage, and beamforming capabilities. Finally, a 12-MHz pitch matched 12 × 12 phased-array ASIC working at 20-V amplitude and 3-bit phasing was designed and experimentally validated, to demonstrate high-frequency phased-array operation. The measurement results verify the phasing functionality of the ASIC with a maximum DNL of 0.35 LSB. 
The CMOS chip consumes 130 mW and 26.6 mW average power during continuous pulsing and while delivering 200-pulse bursts with a PRF of 1 kHz, respectively.","2D ultrasound phased-array; Array signal processing; high-frequency focused ultrasound; high-voltage beamforming channel; Neuromodulation; Phased arrays; Spatial resolution; Transducers; Transmitters; Ultrasonic imaging; ultrasonically powered implanted devices; ultrasound neuromodulation","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-01","","","Bio-Electronics","","",""
"uuid:3273e3df-ae1c-46af-acc9-ce1508e50b7d","http://resolver.tudelft.nl/uuid:3273e3df-ae1c-46af-acc9-ce1508e50b7d","An Experimental Study of Two-level Schwarz Domain-Decomposition Preconditioners on GPUs","Yamazaki, Ichitaro (Sandia National Laboratories); Heinlein, A. (TU Delft Numerical Analysis); Rajamanickam, Sivasankaran (Sandia National Laboratories, New Mexico)","O'Conner, L. (editor)","2023","The generalized Dryja–Smith–Widlund (GDSW) preconditioner is a two-level overlapping Schwarz domain decomposition (DD) preconditioner that couples a classical one-level overlapping Schwarz preconditioner with an energy-minimizing coarse space. When used to accelerate the convergence rate of Krylov subspace iterative methods, the GDSW preconditioner provides robustness and scalability for the solution of sparse linear systems arising from the discretization of a wide range of partial different equations. In this paper, we present FROSch (Fast and Robust Schwarz), a domain decomposition solver package which implements GDSW-type preconditioners for both CPU and GPU clusters. To improve the solver performance on GPUs, we use a novel decomposition to run multiple MPI processes on each GPU, reducing both solver’s computational and storage costs and potentially improving the convergence rate. This allowed us to obtain competitive or faster performance using GPUs compared to using CPUs alone. We demonstrate the performance of FROSch on the Summit supercomputer with NVIDIA V100 GPUs, where we used NVIDIA Multi-Process Service (MPS) to implement our decomposition strategy.The solver has a wide variety of algorithmic and implementation choices, which poses both opportunities and challenges for its GPU implementation. We conduct a thorough experimental study with different solver options including the exact or inexact solution of the local overlapping subdomain problems on a GPU. 
We also discuss the effect of using the iterative variant of the incomplete LU factorization and sparse-triangular solve as the approximate local solver, and using lower precision for computing the whole FROSch preconditioner. Overall, the solve time was reduced by factors of about 2× using GPUs, while the GPU acceleration of the numerical setup time depends on the solver options and the local matrix sizes.","Linear systems; Distributed processing; Scalability; Software algorithms; Graphics processing units; Supercomputers; Software","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-18","","","Numerical Analysis","","",""
"uuid:a2efd12e-b077-4683-bec2-c46c8104b187","http://resolver.tudelft.nl/uuid:a2efd12e-b077-4683-bec2-c46c8104b187","A cyclostratigraphic framework of the Upper Carboniferous Westoe and Cleaver formations in the southern North Sea Basin as a methodology for stratigraphic reservoir characterisation","Baars, T.F. (TU Delft Applied Geology); Huis in ‘t Veld, Richard (Argo Geological Consultants B.V.); Zhang, Linzhi (Student TU Delft); Koopmans, Maaike (Wintershall Noordzee B.V.); McLean, Duncan (MB Stratigraphy Limited); Martinius, A.W. (TU Delft Applied Geology); Abels, H.A. (TU Delft Applied Geology)","","2023","Orbital driven climate control on sedimentation produces regional, stratigraphically repetitive characters and so cyclostratigraphic correlation can improve correlation and identify stratigraphic trends in borehole sections. This concept is commonly used to correlate marine and lacustrine strata. However, in the alluvial domain, its use is more challenging because internal, local dynamics controlling sedimentation may interfere with the expression of cyclic climate forcing. Intervals of low net-to-gross may be important for successful application in this domain as they tend to better document regional changes. This study applies climate-based stratigraphic correlation concepts to improve well correlations, characterise vertical sand distribution, and identify potential reservoir targets in a generally low net-to-gross interval. Coarsening upward sedimentary repetitions (cyclothems) are identified and correlated with high certainty in nineteen well sections in the upper Carboniferous Westoe and Cleaver formations of the Silverpit Basin. Local sedimentary dynamics provide variability in the character of the cyclothems and several types of cyclothem are classified. Correlation of sections using cyclothems recognised on wireline logs is done twice: once manually and once semi-automatically. 
The semi-automated correlation is based on calculation of deviation curves which depict stratigraphic changes that are less dependent on absolute wireline values and follow vertical trends more clearly. The correlations provide composite stratigraphies that are analysed using vertical proportions curves. Both approaches yield similar results in terms of stratigraphic trends. However, for detailed correlation of wells, the manual correlation is better at accounting for any local variability within the system. The same two zones of higher net-to-gross ratios are found using both correlation methods. These are linked to palaeoclimatic changes driven by long eccentricity and the proposed climate stratigraphic model has predictive value for identifying sandstone occurrence. The climate-based stratigraphic correlation improves the assessment of reservoir distribution and properties on small (10–20 m thickness) and large (100–200 m thickness) stratigraphical scales.","Allogenic and autogenic processes; Cyclothem; Fluvial architecture; Orbital climate change; Stratigraphic predictive models; Reservoir characterisation","en","journal article","","","","","","","","","","","Applied Geology","","",""
"uuid:0202635b-7033-4e7f-887a-82252f29cff8","http://resolver.tudelft.nl/uuid:0202635b-7033-4e7f-887a-82252f29cff8","Objects Classification and Clutter Types Mapping using Polarimetric Radar Detection Algorithms","Song, Yiyang (Student TU Delft); Krasnov, O.A. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","Starting from numerical simulation and comparative analysis of different polarimetric detector algorithms using the proposed Gain of Detectability measure, this paper has validated the feasibility and accuracy of polarimetric detectors in scenarios with homogeneous clutter. These algorithms’ application to real radar data with non-homogeneous clutter also shows that detection quality can be seriously improved using detectors that use a priori knowledge of the expected target and clutter polarimetric characteristics. A new application of the Polarimetric Whitening Filter and the Optimal Polarimetric Detector for the classification/mapping of targets and ground-based clutter has been proposed and demonstrated.","Sensitivity; Radar clutter; Signal processing algorithms; Radar detection; Detectors; Gain measurement; Filtering algorithms","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-11","","","Microwave Sensing, Signals & Systems","","",""
"uuid:6f391be2-a267-48c3-878b-3164bfeb7279","http://resolver.tudelft.nl/uuid:6f391be2-a267-48c3-878b-3164bfeb7279","Towards Understanding Machine Learning Testing in Practise","Shome, A. (TU Delft Software Engineering); Cruz, Luis (TU Delft Software Engineering); van Deursen, A. (TU Delft Software Technology)","","2023","Visualisations drive all aspects of the Machine Learning (ML) Development Cycle but remain a vastly untapped resource by the research community. ML testing is a highly interactive and cognitive process which demands a human-in-the-loop approach. Besides writing tests for the code base, bulk of the evaluation requires application of domain expertise to generate and interpret visualisations. To gain a deeper insight into the process of testing ML systems, we propose to study visualisations of ML pipelines by mining Jupyter notebooks. We propose a two prong approach in conducting the analysis. First, gather general insights and trends using a qualitative study of a smaller sample of notebooks. And then use the knowledge gained from the qualitative study to design an empirical study using a larger sample of notebooks. Computational notebooks provide a rich source of information in three formats - text, code and images. We hope to utilise existing work in image analysis and Natural Language Processing for text and code, to analyse the information present in notebooks. 
We hope to gain a new perspective into program comprehension and debugging in the context of ML testing.","AI Engineering; Computational Notebooks; Data Mining; Image Analysis; Machine Learning Testing; Natural Language Processing; NLP for Code","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-01","","Software Technology","Software Engineering","","",""
"uuid:5e89a0c3-4b16-4131-a8da-adaaf86dabdd","http://resolver.tudelft.nl/uuid:5e89a0c3-4b16-4131-a8da-adaaf86dabdd","Exploring sequential interplay between challenges and regulatory processes in collaborative learning with process mining","Channa, Faisal (University of Jyväskylä; Oulu University); Dindar, Muhterem (Oulu University; Tampere University); Nguyen, Andy (Oulu University); Mishra, R. (TU Delft Science Education and Communication; Oulu University)","","2023","This study explored the sequential interplay between challenges and regulatory processes in high- and low-performing collaborative groups. 66 students from a Finnish higher education institution participated in a collaborative task in groups of three. Approximately 34 h of video data were coded. The sequential analysis revealed that both groups had higher sequential transitions between cognitive regulation and emotional/motivational regulation, rather than cognitive challenges. The high-performing groups demonstrated a stronger sequential link between emotional/motivational regulation and cognitive regulation than the low-performing groups did when faced with cognitive challenges. The study establishes a theoretical grounding and advances our understanding of regulated learning. Since collaborative learning tasks are highly adopted in the higher education context, especially in the Nordic region, this study has practical implications for higher education in these countries and beyond as they seek to develop pedagogical methodologies and customised support to help collaborative groups resolve challenges by initiating regulatory processes.","Challenges; collaborative learning; process mining; regulation; sequential analysis; socially shared regulation of learning","en","journal article","","","","","","","","","","","Science Education and Communication","","",""
"uuid:089074d0-eb9b-4137-8cb0-6fc23f99c2f0","http://resolver.tudelft.nl/uuid:089074d0-eb9b-4137-8cb0-6fc23f99c2f0","Theoretical Framework for A Succinct Empirical Mode Decomposition","Jin, J. (TU Delft Railway Engineering); Li, Z. (TU Delft Railway Engineering)","","2023","Empirical mode decomposition (EMD) lacks a strong theoretical support although extensively applied. We propose a theoretical framework for a succinct EMD in this work, with the assumption of invariant extrema locations for one IMF extraction. We define the envelope mean filter (EMF) and prove that the filter matrix satisfies five properties. The sifting matrix is convergent to an idempotent matrix. An IMF is the projection of the input signal on the generalized eigenspace of the EMF matrix. An IMF is orthogonal to the residual signal, but different IMFs have no orthogonality. With numerical experiments on different signals, our framework achieves similar results to the classic EMD.","adaptive signal processing; Eigenvalues and eigenfunctions; Empirical mode decomposition; Filter banks; Filtering theory; Interpolation; Low-pass filters; Splines (mathematics); time-frequency analysis; Time-frequency analysis; time-varying filters","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-07","","","Railway Engineering","","",""
"uuid:5b3bb562-a961-41d5-b243-7f5077a98728","http://resolver.tudelft.nl/uuid:5b3bb562-a961-41d5-b243-7f5077a98728","Performance Analysis of Phase-Coded FMCW for Joint Sensing and Communication","Kumbul, U. (TU Delft Microwave Sensing, Signals & Systems); Petrov, N. (NXP Semiconductors); Silveira Vaucher, C. (NXP Semiconductors); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","Phase-coded frequency modulated continuous wave (PC-FMCW) radars for joint sensing and communication are considered. The sensing and communication performance of the two signal processing methods, phase lag compensated group delay filter and filter bank receivers, are compared. It is demonstrated that the phase lag compensated group delay receiver provides better sensing performance and requires less computational complexity than the filter bank receiver. The application of the former receiver is, however, limited by the bit error rate degradation with the communication signal bandwidth.","Degradation; Nonlinear distortion; Bit error rate; Filter banks; Receivers; Bandwidth; Radar signal processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-11","","","Microwave Sensing, Signals & Systems","","",""
"uuid:e0793ab9-18f3-4fd4-83f8-15734ea4eb52","http://resolver.tudelft.nl/uuid:e0793ab9-18f3-4fd4-83f8-15734ea4eb52","Delamination Size Prediction for Compressive Fatigue Loaded Composite Structures Via Ultrasonic Guided Wave Based Structural Health Monitoring","Gul, F.C. (TU Delft Structural Integrity & Composites); Moradi, M. (TU Delft Structural Integrity & Composites); Benedictus, R. (TU Delft Structural Integrity & Composites); HADJRIA, RAFIK (Safran Aircraft Engines); LUGOVTSOVA, YEVGENIYA (BAM Federal Institute for Materials Research and Testing); Zarouchas, D. (TU Delft Structural Integrity & Composites)","","2023","Under in-plane compressive load conditions, the growth of a delamination initially induced by an impact can be followed by a fast growth after a threshold level, which leads to a catastrophic failure in composite structures. To avoid reaching this critical level, it is essential to uncover the delamination size and growth pattern in real time. Ultrasonic Guided Waves (UGW) have a strong capability to interrogate and monitor the structure in real-time and thus track the growth of damage, which may occur during the flight cycles. Although various types of damage affect the monitored UGW signals, it is challenging to determine from the UGW signals what types of damage and at what rate of growth are occurring within the structure. UGW signals can be acquired at defined intervals and then analysed to possibly detect different types of damages, such as delamination, and to quantify the rate of damage growth over fatigue cycles. However, correlating the UGW-based Damage Indicators (DIs) with the specific type of damage, such as delamination, and damage growth is a challenging task as the relation between these DIs and the actual damage state is very complex. 
Therefore, in this study, a supervised deep neural network (DNN)-based prediction model is proposed, aiming to diagnose the delamination size of the composite structure by correlating the UGW-based DIs with the quantified time-varying delamination size. UGW data is collected through a network of permanently installed piezoelectric transducers (PZTs). The delamination size is obtained through the ultrasonic C-scan technique at defined cycles. DIs are extracted in time, frequency, and time-frequency domains and used as the input for the DNN-based regression model. Each sensor-actuator path is considered an independent set of indicators, which are separated for training, validation, and testing purposes. The effect of the different paths on the delamination size prediction is presented along with the model performance on measured delamination growth in a woven-type composite sample.","Ultrasonic guided waves; Damage Indicators; piezoelectric transducers; PZT; Deep neural network; Structural health monitoring (SHM); Carbon fiber reinforced polymer (CFRP); Compression after impact; Wavelet transform (WT); Signal Processing","en","conference paper","DEStech publications, Inc.","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-06-30","","","Structural Integrity & Composites","","",""
"uuid:86f0e258-98fa-42f8-b779-8aceed685f2b","http://resolver.tudelft.nl/uuid:86f0e258-98fa-42f8-b779-8aceed685f2b","Advanced Health Monitoring of Composite Structures Through Deep Learning-Based Analysis of Lamb Wave Data for Developing Health Indicators","Moradi, M. (TU Delft Structural Integrity & Composites); Gul, F.C. (TU Delft Structural Integrity & Composites); Chiachío, Juan (Universidad de Granada; University of Granada); Benedictus, R. (TU Delft Structural Integrity & Composites); Zarouchas, D. (TU Delft Structural Integrity & Composites)","","2023","A health indicator (HI) serves as an intermediary link between structural health monitoring (SHM) data and prognostic models, and an efficient HI should meet prognostic criteria, i.e., monotonicity, trendability, and prognosability. However, designing a proper HI for composite structures is a challenging task due to the complex damage accumulation process during operational conditions. Additionally, designing a HI that is independent of historical SHM data (i.e., from the healthy state until the current state) is even more challenging as HI and remaining useful life prediction are time-dependent phenomena. A reliable SHM technique is required to extract informative time-independent data, and a powerful model is necessary to construct a proper HI from that data. The lamb wave (LW) technique is a useful SHM method that can extract such time-independent data. However, translating the LW data at each time step to the appropriate HI value
is a challenge. AI—deep learning in this case—offers significant mathematical potential to meet this difficulty. A semi-supervised learning approach is developed to train the model using the simulated ideal HIs as the targets. The model uses the current LW data, without prior or subsequent data, to output the current HI value. Prognostic criteria are then calculated using the entire HI trajectory until the end-of-life. To validate the proposed approach, aging experiments from NASA’s prognostics data repository are used, which include composite specimens subjected to a tension-tension fatigue loading and monitored using the LW technique. The LW data is first processed using the Hilbert transform, and then, upper and lower signal envelopes in two states – baseline and current time – are used to feed the deep learning model. The results confirm the effectiveness of the proposed approach according to the prognostic criteria. The effect of different triggering frequencies of the LW system on the results is also discussed in terms of the prognostic criteria.","Prognostics and health management (PHM); Intelligent health indicator; Semi-supervised learning; Tension-Tension fatigue; Composite structures; Signal Processing; machine learning (ML) algorithms; Deep learning (DL); Structural health monitoring (SHM); Guided waves","en","conference paper","DEStech publications, Inc.","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-06-30","","","Structural Integrity & Composites","","",""
"uuid:0c77360a-a629-466d-b04c-5e6e05f51694","http://resolver.tudelft.nl/uuid:0c77360a-a629-466d-b04c-5e6e05f51694","Predictive Maintenance Planning Using Renewal Reward Processes and Probabilistic RUL Prognostics: Analyzing the Influence of Accuracy and Sharpness of Prognostics","Mitici, M.A. (Universiteit Utrecht); de Pater, I.I. (TU Delft Air Transport & Operations); Zeng, Zhiguo (CentraleSupélec - Paris-Saclay); Barros, Anne (CentraleSupélec - Paris-Saclay)","brito, mario (editor); Aven, Terje (editor); Baraldi, Piero (editor); Cepin, Marko (editor); Zio, Enrico (editor)","2023","We pose the maintenance planning for systems using probabilistic Remaining Useful Life (RUL) prognostics as a renewal reward process. Data-driven probabilistic RUL prognostics are obtained using a Convolutional Neural Network with Monte Carlo dropout. The maintenance planning model is illustrated for aircraft turbofan engines. The results show that in the initial monitoring phase, the accuracy and sharpness of the RUL prognostics is relatively small. The maintenance of the engines is therefore scheduled far in the future. As the usage of the engine increases, the accuracy of the prognostics improves, while the sharpness remains relatively small. As soon as the estimated probability of the RUL is skewed towards 0, the maintenance planning model consistently indicates it is optimal to replace the engines immediately, i.e., ""now"". 
This shows that probabilistic RUL prognostics support an effective maintenance planning of the engines, despite being imperfect with respect to accuracy and sharpness.","Predictive maintenance planning; Probabilistic RUL; prognostics; Aircraft engines; Renewal processes; Convolutional neural network; Monte Carlo dropout","en","conference paper","Research Publishing","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-01","","","Air Transport & Operations","","",""
"uuid:a0c99e9b-0c85-4707-b2e9-fdbe0d7750c3","http://resolver.tudelft.nl/uuid:a0c99e9b-0c85-4707-b2e9-fdbe0d7750c3","Unrolling of Simplicial ElasticNet for Edge Flow Signal Reconstruction","Liu, Chengen (Student TU Delft); Leus, G.J.T. (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Multimedia Computing)","","2023","The edge flow reconstruction task consists of retreiving edge flow signals from corrupted or incomplete measurements. This is typically solved by a regularized optimization problem on higher-order networks such as simplicial complexes and the corresponding regularizers are chosen based on prior knowledge. Tailoring this prior to the setting of interest can be challenging or it may not even be possible. Thus, we consider to learn this prior knowledge via a model-based deep learning approach. We propose a new regularized optimization problem for the simplicial edge flow reconstruction task, the simplicial ElasticNet, which combines the advantages of the 1 and 2 norms. We solve the simplicial ElasticNet problem via the multi-block alternating direction method of multipliers (ADMM) algorithm and provide conditions on its convergence. By unrolling the ADMM iterative steps, we develop a model-based neural network with a low requirement on the number of training data. This unrolling network replaces the fixed parameters in the iterative algorithm by learnable weights, thus exploiting the neural network s learning capability while preserving the iterative algorithm s interpretability. We enhance this unrolling network via simplicial convolutional filters to aggregate information from the edge flow neighbors, ultimately, improving the network learning expressivity. 
Extensive experiments on real-world and synthetic datasets validate the proposed approaches and show considerable improvements over both baselines and traditional non-model-based neural networks.","Convolution; Information filters; Laplace equations; Neural networks; Noise measurement; Optimization; Signal processing over higher-order networks; simplicial convolutional filters; Task analysis; topological signal processing","en","journal article","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:ee18af1f-40d2-4df0-91b5-185fa338fb4b","http://resolver.tudelft.nl/uuid:ee18af1f-40d2-4df0-91b5-185fa338fb4b","Formal Abstraction of General Stochastic Systems via Noise Partitioning","Skovbekk, John (University of Colorado); Laurenti, L. (TU Delft Team Luca Laurenti); Frew, Eric (University of Colorado); Lahijanian, Morteza (University of Colorado)","","2023","Verifying the performance of safety-critical, stochastic systems with complex noise distributions is difficult. We introduce a general procedure for the finite abstraction of nonlinear stochastic systems with nonstandard (e.g., non-affine, non-symmetric, non-unimodal) noise distributions for verification purposes. The method uses a finite partitioning of the noise domain to construct an interval Markov chain (IMC) abstraction of the system via transition probability intervals. Noise partitioning allows for a general class of distributions and structures, including multiplicative and mixture models, and admits both known and data-driven systems. The partitions required for optimal transition bounds are specified for systems that are monotonic with respect to the noise, and explicit partitions are provided for affine and multiplicative structures. By the soundness of the abstraction procedure, verification on the IMC provides guarantees on the stochastic system against a temporal logic specification. In addition, we present a novel refinement-free algorithm that improves the verification results. Case studies on linear and nonlinear systems with non-Gaussian noise, including a data-driven example, demonstrate the generality and effectiveness of the method without introducing excessive conservatism.","Autonomous systems; Kernel; Markov processes; Nonlinear systems; Probabilistic logic; Standards; Stochastic systems; stochastic systems; Uncertainty","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-06-07","","","Team Luca Laurenti","","",""
"uuid:541ede3f-c61e-4bd7-9403-99d828ff024d","http://resolver.tudelft.nl/uuid:541ede3f-c61e-4bd7-9403-99d828ff024d","Architects’ Methodology in Adaptive Reuse of Heritage Buildings","Arfa, F. (TU Delft Heritage & Architecture); Quist, W.J. (TU Delft Heritage & Architecture); Lubelli, B. (TU Delft Heritage & Architecture); Zijlstra, H. (TU Delft Heritage & Architecture)","Augustiniok, Nadin (editor)","2023","Adaptive reuse (AR) of heritage buildings is common practice in The Netherlands and is becoming more and more common at the International level. While AR projects are generally considered positive actions towards preserving the qualities of heritage buildings, not all projects have similar (positive) impact. To propose a methodology for dealing with the AR of heritage buildings aiming for positive impact, the AR process has been studied. After a comprehensive systematic literature review, a theoretical model representing the steps of the AR process has been sketched (fig.1). This model depicts the ideal steps of architects in dealing with heritage buildings. To check whether these steps are actually followed, two effective AR projects, winners of the NRP golden phoenix prize, were studied namely ‘LocHal’ in Tilburg (fig.2) and ‘Energiehuis’ in Dordrecht (fig.3). During the research, the cases were visited, the published literature on the cases was reviewed, and architects and other stakeholders involved were interviewed. The interviews were based on the steps of the theoretical model but the model was not shown to the architects till the end of the interview. Finally feedback was asked from the architects if the model represented their actual steps and where they diverged. The analysis of the collected data confirmed that architects followed the steps reported in figure 1. However, the process was reported to be not linear, as suggested in the model, but to include several loops between some of the steps, mainly between steps 1, 2 , 3, 4, 5, and 6 (fig.4). 
Next to refining the AR model, investigation of the case studies led to the identification of tools and methods used by architects, which may have positively influenced the effectiveness (positive impact) of the final result. Both case studies distinguished themselves, according to the NRP jury reports, because of their high ‘social value creation’. Involving end-users, other producers, and original users in different steps of the AR process has been identified as the main method used by the architects contributing to this positive impact.","heritage buildings; adaptive reuse; methodology; dordrecht; ar process; theoretical model","en","conference paper","Hasselt University","","","","","The organization of this international colloquium was made possible through the generous financial support of the DIOS Incentive Fund (UHasselt), the Doctoral School of Behavioral Sciences and Humanities (UHasselt), and the Research Foundation Flanders (FWO), as well as the invaluable practical assistance provided by the Faculty of Architecture and Arts of UHasselt and the Flanders Architecture Institute. Our heartfelt appreciation goes out to all our esteemed colleagues whose dedicated efforts contributed to the seamless execution of this event. This colloquium is organized as an extension of the exhibition As Found: Experiments in Preservation by the Flanders Architecture Institute. Curated by Sofie De Caigny, Hulya Ertas and Bie Plevoets, the exhibition is on show at De Singel, Antwerp, from 6 September 2023 to 17 March 2024. The exhibition is accompanied by a catalogue, available in English (ISBN: 9789492567321) and Dutch (ISBN: 9789492567338).","","","","","Heritage & Architecture","","",""
"uuid:59fb0859-1345-440e-9c9b-edb538f86d9b","http://resolver.tudelft.nl/uuid:59fb0859-1345-440e-9c9b-edb538f86d9b","One and Many Details: Considering the Contingencies of Building as Empirical Evidence for Architectural Pedagogy","Crevels, Eric (TU Delft Situated Architecture); Mejia Hernandez, J.A. (TU Delft Situated Architecture)","","2023","The study of built objects has always played a key role in the education of the architect. At the earliest stages of training most of us sat in front of buildings and drew them, trying to capture their overall features and minute details. What appears simple is, in fact, an extremely meaningful exercise. It presumes that drawing an existing object allows us to understand what decisions were made in its conception, granted that evidence of those decisions is actually there, congealed as empirical evidence and available for further use.
As students advance in their studies, this close attention to objects and the decisions that define them gives way to more complex reflections. Final-year students seldom sit in front of buildings and draw them. Their fascination with societal issues and formal innovation seems to leave little room to ponder on the apparently simple ways in which materials come together. Likewise, interest in the built as a source of knowledge appears to wane among faculty who, inclined towards fashionable forms of scholarship, outsource technological research and education to engineers and other pragmatists.
While architectural education’s turn towards the humanities offers new and exciting possibilities, the relegation of the built to a mere problem-solving role is not without its consequences. Among them, perhaps the most unfortunate outcome of assuming construction as applied, externally produced knowledge, is that it robs us of rare and precious insight that is ingrained in the built.
Looking for that insight, we will describe how a design studio can use construction as a means for students to produce and develop their own architectural knowledge. Our description will be favored by an outline of the supporting theory, the epistemology we used to operate it, and the methodology employed to teach the course.
Throughout a ten-week period, we accompanied a group of sixteen master’s students in their process of exploration, evaluation and discovery of four details from existing buildings. Our goal, and the challenge we presented to the group, was to obtain from these details a theory and a new design.","collective tacit knowledge; embedded knowledge; architecture; craft; design process; material culture; pedagogy","en","report","TACK Publishing Platform","","","","","","","","","","Situated Architecture","","",""
"uuid:af452225-2403-4183-ad73-3b9727641685","http://resolver.tudelft.nl/uuid:af452225-2403-4183-ad73-3b9727641685","Process systems engineering perspectives on eco-efficient downstream processing of volatile biochemicals from fermentation","Jankovic, T.J. (TU Delft BT/Bioprocess Engineering); Straathof, Adrie J.J. (TU Delft BT/Bioprocess Engineering); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","Increasing concerns over environmental pollution, climate change and energy security are driving a necessary transition from fossil carbon sources to more sustainable alternatives. Due to lower environmental impact, biochemicals are rapidly gaining significance as a potential renewable solution, particularly of interest in Europe. In this context, process systems engineering (PSE) helps with the decision-making at multiple scales and levels, aiming for optimum use of (renewable) resources. Fermentation using waste biomass or industrial off-gases is a promising way for the production of these products. However, due to the inhibitory effects or low substrate concentrations, relatively low product concentrations can be obtained. Consequently, significant improvements in downstream processing are needed to increase the competitiveness of the overall bioprocesses. This paper supports sustainable development by providing new PSE perspectives on the purification of volatile bioproducts from dilute fermentation broths. Since purification significantly contributes to the total cost of biochemical production processes (20%–40% of the total cost), enhancing this part may substantially improve the competitiveness of the overall bioprocesses. The highly advanced downstream process offers the possibility of recovering high-purity products while enhancing the fermentation step by continuously removing inhibitory products, and recycling microorganisms with most of the present water. 
Besides higher productivity, the upstream process can be greatly improved by avoiding loss of biomass, enabling closed-loop operation and decreasing the need for fresh water. Applying heat pumping, heat integration and other methods of process intensification (PI) can drastically reduce energy requirements and CO2 emissions. Additionally, the opportunity to use renewable electricity instead of conventional fossil energy presents a significant step toward (green) electrification and decarbonization of the chemical industry.","biochemical production; downstream processing; distillation; heat pumps; heat integration; process intensification; electrification","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:cbad79df-846c-4153-a6eb-8cdacfc73b39","http://resolver.tudelft.nl/uuid:cbad79df-846c-4153-a6eb-8cdacfc73b39","Micromechanical Models for FDM 3D-Printed Polymers: A Review","Bol, R.J.M. (TU Delft Materials and Environment); Šavija, B. (TU Delft Materials and Environment)","","2023","Due to its large number of advantages compared to traditional subtractive manufacturing techniques, additive manufacturing (AM) has gained increasing attention and popularity. Among the most common AM techniques is fused filament fabrication (FFF), usually referred to by its trademarked name: fused deposition modeling (FDM). This is the most efficient technique for manufacturing physical three-dimensional thermoplastics, such that FDM machines are nowadays the most common. Regardless of the 3D-printing methodology, AM techniques involve layer-by-layer deposition. Generally, this layer-wise process introduces anisotropy into the produced parts. The manufacturing procedure creates parts possessing heterogeneities at the micro (usually up to 1 mm) and meso (mm to cm) length scales, such as voids and pores, whose size, shape, and spatial distribution are mainly influenced by the so-called printing process parameters. Therefore, it is crucial to investigate their influence on the mechanical properties of FDM 3D-printed parts. This review starts with the identification of the printing process parameters that are considered to affect the micromechanical composition of FDM 3D-printed polymers. In what follows, their (negative) influence is attributed to characteristic mechanical properties. The remainder of this work reviews the state of the art in geometrical, numerical, and experimental analyses of FDM-printed parts. 
Finally, conclusions are drawn for each of the aforementioned analyses in view of microstructural modeling.","fused deposition modeling (FDM); additive manufacturing (AM); printing process parameters; mechanical anisotropy; inter-layer bond; intra-layer bond","en","review","","","","","","","","","","","Materials and Environment","","",""
"uuid:9e981b1f-dc14-4ba8-b02d-8a0ee3bae67f","http://resolver.tudelft.nl/uuid:9e981b1f-dc14-4ba8-b02d-8a0ee3bae67f","A Comprehensive Review on Electrocatalytic Applications of 2D Metallenes","Basyooni, Mohamed A. (TU Delft Dynamics of Micro and Nano Systems; Selçuk University)","","2023","This review introduces metallenes, a cutting-edge form of atomically thin two-dimensional (2D) metals, gaining attention in energy and catalysis. Their unique physicochemical and electronic properties make them promising for applications like catalysis. Metallenes stand out due to their abundance of under-coordinated metal atoms, enhancing the catalytic potential by improving atomic utilization and intrinsic activity. This review explores the utility of 2D metals as electrocatalysts in sustainable energy conversion, focusing on the Oxygen Evolution Reaction, Oxygen Reduction Reaction, Fuel Oxidation Reaction, and Carbon Dioxide Reduction Reaction. Aimed at researchers in nanomaterials and energy, the review is a comprehensive resource for unlocking the potential of 2D metals in creating a sustainable energy landscape.","2D metals; metallenes; electrocatalysts; atomically thin structure; electrochemical processes","en","review","","","","","","","","","","","Dynamics of Micro and Nano Systems","","",""
"uuid:57e4b903-3dce-4eb8-9108-be1ef7e0848b","http://resolver.tudelft.nl/uuid:57e4b903-3dce-4eb8-9108-be1ef7e0848b","A Fast Geometric Multigrid Method for Curved Surfaces","Wiersma, R.T. (TU Delft Computer Graphics and Visualisation); Nasikun, A. (TU Delft Computer Graphics and Visualisation; Universitas Gadjah Mada); Eisemann, E. (TU Delft Computer Graphics and Visualisation); Hildebrandt, K.A. (TU Delft Computer Graphics and Visualisation)","Spencer, Stephen N. (editor)","2023","We introduce a geometric multigrid method for solving linear systems arising from variational problems on surfaces in geometry processing, Gravo MG. Our scheme uses point clouds as a reduced representation of the levels of the multigrid hierarchy to achieve a fast hierarchy construction and to extend the applicability of the method from triangle meshes to other surface representations like point clouds, nonmanifold meshes, and polygonal meshes. To build the prolongation operators, we associate each point of the hierarchy to a triangle constructed from points in the next coarser level. We obtain well-shaped candidate triangles by computing graph Voronoi diagrams centered around the coarse points and determining neighboring Voronoi cells. Our selection of triangles ensures that the connections of each point to points at adjacent coarser and finer levels are balanced in the tangential directions. As a result, we obtain sparse prolongation matrices with three entries per row and fast convergence of the solver. Code is available at https://graphics.tudelft.nl/gravo_mg.","geometry processing; multigrid methods; Poisson problems; geometric multigrid; Laplace matrix","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:77d54959-e4e2-4c51-9394-e913376f2313","http://resolver.tudelft.nl/uuid:77d54959-e4e2-4c51-9394-e913376f2313","Sensor Selection for Angle of Arrival Estimation Based on the Two-Target Cramér-Rao Bound","Kokke, C.A. (TU Delft Signal Processing Systems); Coutino, Mario (TNO); Anitori, Laura (TNO); Heusdens, R. (Netherlands Defence Academy); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2023","Sensor selection is a useful method to help reduce data throughput, as well as computational, power, and hardware requirements, while still maintaining acceptable performance. Although minimizing the Cramér-Rao bound has been adopted previously for sparse sensing, it did not consider multiple targets and unknown source models. In this work, we propose to tackle the sensor selection problem for angle of arrival estimation using the worst-case Cramér-Rao bound of two uncorrelated sources. To do so, we cast the problem as a convex semi-definite program and retrieve the binary selection by randomized rounding. Through numerical examples related to a linear array, we illustrate the proposed method and show that it leads to the natural selection of elements at the edges plus the center of the linear array. This contrasts with the typical solutions obtained from minimizing the single-target Cramér-Rao bound.","sparse sensing; cramér-rao bound; multi-target estimation; array processing; sensor selection","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-11-05","","","Signal Processing Systems","","",""
"uuid:1ac1e921-49ca-4c78-9c2e-85d8f35b5d6b","http://resolver.tudelft.nl/uuid:1ac1e921-49ca-4c78-9c2e-85d8f35b5d6b","Distributed Gaussian Process Hyperparameter Optimization for Multi-Agent Systems","Zhai, P. (TU Delft Signal Processing Systems); Rajan, R.T. (TU Delft Signal Processing Systems)","","2023","Gaussian Process (GP) is a flexible non-parametric method which has a wide variety of applications e.g., field estimation using multi-agent systems. However, the training of the hyperparameters suffers from high computational complexity. Recently, distributed hyperparameter optimization with proximal gradients has been proposed to reduce complexity, however only for a network with a central station. In this work, exploiting edge-based constraints, we propose two fully-distributed algorithms pxADMMfd and pxADMMfd,fast for a network of multi-agent systems, which do not rely on a central station. In addition, asynchronous versions of the algorithms are also proposed to reduce the synchronization overhead in heterogeneous networks. Simulations are conducted for a field estimation problem, using both artificial, and real-world datasets, which show that the proposed fully-distributed algorithms successfully converge, at the cost of an increased number of iterations.","Gaussian Process; Multi-agent Systems; ADMM; Field estimation","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-11-05","","","Signal Processing Systems","","",""
"uuid:616f2ec2-20c0-475c-b7eb-1d3ba7d76580","http://resolver.tudelft.nl/uuid:616f2ec2-20c0-475c-b7eb-1d3ba7d76580","Diagnosing and Addressing Emergent Harms in the Design Process of Public AI and Algorithmic Systems","Nouws, S.J.J. (TU Delft Information and Communication Technology); Martinez de Rituerto de Troya, I. (TU Delft Information and Communication Technology); Dobbe, R.I.J. (TU Delft Information and Communication Technology); Janssen, M.F.W.H.A. (TU Delft Engineering, Systems and Services)","Cid, David Duenas (editor)","2023","Algorithmic and data-driven systems are increasingly used in the public sector to improve the efficiency of existing services or to provide new services through the newfound capacity to process vast volumes of data. Unfortunately, certain instances also have negative consequences for citizens, in the form of discriminatory outcomes, arbitrary decisions, lack of recourse, and more. These have serious impacts on citizens ranging from material to psychological harms. These harms partly emerge from choices and interactions in the design process. Existing critical and reflective frameworks for technology design do not address several aspects that are important to the design of systems in the public sector, namely protection of citizens in the face of potential algorithmic harms, the design of institutions to ensure system safety, and an understanding of how power relations affect the design, development, and deployment of these systems. The goal of this workshop is to develop these three perspectives and take the next step towards reflective design processes within public organisations. The workshop will be divided into two parts. In the first half we will elaborate the conceptual foundations of these perspectives in a series of short talks. 
Workshop participants will learn new ways of protecting against algorithmic harms in sociotechnical systems through understanding what institutions can support system safety, and how power relations influence the design process. In the second half, participants will get a chance to apply these lenses by analysing a real world case, and reflect on the challenges in applying conceptual frameworks to practice.","Artificial Intelligence; data science; design process; institutional design; power analysis; public sector; system safety","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","Engineering, Systems and Services","Information and Communication Technology","","",""
"uuid:37b1d8f9-2ef4-4e22-be7e-1a7c0e553d7c","http://resolver.tudelft.nl/uuid:37b1d8f9-2ef4-4e22-be7e-1a7c0e553d7c","Intraprocedural assessment of ablation margins using computed tomography co-registration in hepatocellular carcinoma treatment with percutaneous ablation: IAMCOMPLETE study","Hendriks, P. (Leiden University Medical Center); van Dijk, Kiki M. (Leiden University Medical Center); Boekestijn, Bas (Leiden University Medical Center); Broersen, Alexander (Leiden University Medical Center); van Duijn-de Vreugd, Jacoba J. (Leiden University Medical Center); Coenraad, Minneke J. (Leiden University Medical Center); Dijkstra, J. (Leiden University Medical Center); de Geus-Oei, L.F. (TU Delft RST/Radiation, Science and Technology; Leiden University Medical Center; University of Twente); Burgmans, M.C. (Leiden University Medical Center)","","2023","Purpose: The primary objective of this study was to determine the feasibility of ablation margin quantification using a standardized scanning protocol during thermal ablation (TA) of hepatocellular carcinoma (HCC), and a rigid registration algorithm. Secondary objectives were to determine the inter- and intra-observer variability of tumor segmentation and quantification of the minimal ablation margin (MAM). Materials and methods: Twenty patients who underwent thermal ablation for HCC were included. There were thirteen men and seven women with a mean age of 67.1 ± 10.8 (standard deviation [SD]) years (age range: 49.1–81.1 years). All patients underwent contrast-enhanced computed tomography examination under general anesthesia directly before and after TA, with preoxygenated breath hold. Contrast-enhanced computed tomography examinations were analyzed by radiologists using rigid registration software. Registration was deemed feasible when accurate rigid co-registration could be obtained. Inter- and intra-observer rates of tumor segmentation and MAM quantification were calculated. 
MAM values were correlated with local tumor progression (LTP) after one year of follow-up. Results: Co-registration of pre- and post-ablation images was feasible in 16 out of 20 patients (80%) and 26 out of 31 tumors (84%). Mean Dice similarity coefficients for inter- and intra-observer variability of tumor segmentation were 0.815 and 0.830, respectively. Mean MAM was 0.63 ± 3.589 (SD) mm (range: -6.26–6.65 mm). LTP occurred in four out of 20 patients (20%). The mean MAM value for patients who developed LTP was -4.00 mm, as compared to 0.727 mm for patients who did not develop LTP. Conclusion: Ablation margin quantification is feasible using a standardized contrast-enhanced computed tomography protocol. Interpretation of MAM was hampered by the occurrence of tissue shrinkage during TA. Further validation in a larger cohort should lead to meaningful cut-off values for technical success of TA.","Ablation margin; Computed tomography; Hepatocellular carcinoma; Image processing; Thermal ablation","en","journal article","","","","","","","","","RST/Radiation, Science and Technology","","","",""
"uuid:891dd56e-3efd-4127-a7ed-618c1c72baa4","http://resolver.tudelft.nl/uuid:891dd56e-3efd-4127-a7ed-618c1c72baa4","Focal deblending: Marine data processing experiences","Kontakis, A. (TU Delft ImPhys/Verschuur group); Verschuur, D.J. (TU Delft Applied Geophysics and Petrophysics; TU Delft ImPhys/Verschuur group)","","2023","In contrast to conventional acquisition practices, simultaneous source acquisition allows for overlapping wavefields to be recorded. Relaxing the shot schedule in this manner has certain advantages, such as allowing for faster acquisition and/or denser shot sampling. This flexibility usually comes at the cost of an extra step in the processing workflow, where the wavefields are deblended, that is, separated. An inversion-type algorithm for deblending, based on the focal transform, is investigated. The focal transform uses an approximate velocity model to focus seismic data. The combination of focusing with sparsity constraints is used to suppress blending noise in the deblended wavefield. The focal transform can be defined in different ways to better match the spatial sampling of different types of marine surveys. To avoid solving a large inverse problem, involving a large part of the survey simultaneously, the input data can be split into sub-sets that are processed independently. We discuss the formation of such sub-sets for ocean bottom node and streamer-type acquisitions. Two deblending experiments are then carried out. The first is on numerically blended ocean bottom node field data. The second is on field-blended towed streamer data with a challenging signal overlap. The latter experiment is repeated using curvelet-based deblending for comparison purposes, showing the virtues of the focal deblending process. 
Several challenges of basing deblending around the focal transform are discussed as well as some suggestions for improved implementations.","data processing; noise; seismic acquisition; signal processing","en","journal article","","","","","","","","","","","ImPhys/Verschuur group","","",""
"uuid:87c9c52c-5781-4da3-9cd4-8134ce54362b","http://resolver.tudelft.nl/uuid:87c9c52c-5781-4da3-9cd4-8134ce54362b","A General Hierarchical Control System to Model ACC Systems: An Empirical Study","Ruan, Tiancheng (Southeast University); Wang, Hao (Southeast University); Jiang, Rui (Beijing Jiaotong University); Li, Xiaopeng (University of Wisconsin-Madison); Xie, N. (TU Delft Electronic Instrumentation); Xie, Xinjian (Guangzhou Baiyun International Airport Co); Hao, Ruru (Chang'an University); Dong, Changyin (Southeast University)","","2023","Urged by a close future perspective of a traffic flow made of a mix of human-driven vehicles and automated vehicles (AVs), research has recently focused on studying the traffic flow characteristics of Adaptive Cruise Controls (ACCs), the most typical AV. However, in most works, the ACC system is studied under a simplifying and unrealistic assumption, or the ACC system modeled is inaccurate. This paper proposes a general hierarchical control system to model ACC systems with several assumptions based on the deficiencies above. Moreover, a field experiment was conducted, and the corresponding experimental data was used to verify the proposed hierarchical control system and assumptions. In addition, string stability is explored along with sensitivity analyses of control parameters based on an example under the constant time gap policy. The results show that different upper-level controller parameters have different delays, where the delay of the speed is negligible; the introduction of actuator delay and lag in the lower-level controller can significantly improve the model goodness of fit. 
Furthermore, optimizing the delay and lag in the lower-level controller can enhance the string stability of ACCs more significantly than optimizing the control parameters.","Actuators; Adaptive cruise control; Control systems; Data models; Delays; field experiments; hierarchical control system; Mathematical models; Process control; Stability criteria; string stability","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-02-25","","","Electronic Instrumentation","","",""
"uuid:b274dfd4-9a7a-4754-b301-236acbce9179","http://resolver.tudelft.nl/uuid:b274dfd4-9a7a-4754-b301-236acbce9179","Cyclical Variational Bayes Monte Carlo for efficient multi-modal posterior distributions evaluation","Igea, Felipe (University of Oxford); Cicirello, A. (TU Delft Mechanics and Physics of Structures; University of Oxford)","","2023","Multi-modal distributions of some physics-based model parameters are often encountered in engineering due to different situations such as a change in some environmental conditions, and the presence of some types of damage and non-linearity. In statistical model updating, for locally identifiable parameters, it can be anticipated that multi-modal posterior distributions would be found. The full characterization of these multi-modal distributions is important as methodologies for structural condition monitoring in structures are frequently based on the comparison of the damaged and healthy models of the structure. The characterization of posterior multi-modal distributions using state-of-the-art sampling techniques would require a large number of simulations of expensive-to-run physics-based models. Therefore, when a limited number of simulations can be run, as it often occurs in engineering, the traditional sampling techniques would not be able to capture accurately the multi-modal distributions. This could potentially lead to large numerical errors when assessing the performance of an engineering structure under uncertainty. Therefore, an approach is proposed for drastically reducing the number of model runs while yielding accurate estimates of highly multi-modal posterior distributions. This approach introduces a cyclical annealing schedule into the Variational Bayes Monte Carlo (VBMC) method to improve the algorithm's phase of exploration and the finding of high probability areas in the multi-modal posteriors throughout the different cycles. 
Three numerical and one experimental investigations are used to compare the proposed cyclical VBMC with the standard VBMC algorithm, the monotonic VBMC and the Transitional Ensemble Markov Chain Monte Carlo (TEMCMC). It is shown that the standard VBMC fails in capturing multi-modal posteriors as it is unable to escape already found regions of high posterior density. In the presence of highly multi-modal posteriors, the proposed cyclical VBMC algorithm outperforms all the other approaches in terms of accuracy of the resulting posterior, and number of model runs required.","Bayesian inference; Bayesian quadrature; Cyclical annealing; Gaussian process; Model updating; Variational inference","en","journal article","","","","","","","","","","","Mechanics and Physics of Structures","","",""
"uuid:a9727e30-4cb6-465f-bb4e-5dafc42b83b2","http://resolver.tudelft.nl/uuid:a9727e30-4cb6-465f-bb4e-5dafc42b83b2","Sticky PDMP samplers for sparse and local inference problems","Bierkens, G.N.J.C. (TU Delft Statistics); Grazzi, S. (TU Delft Statistics; University of Warwick); van der Meulen, F.H. (TU Delft Statistics; Vrije Universiteit Amsterdam); Schauer, M.R. (TU Delft Statistics; Chalmers University of Technology; University of Gothenburg)","","2023","We construct a new class of efficient Monte Carlo methods based on continuous-time piecewise deterministic Markov processes (PDMPs) suitable for inference in high dimensional sparse models, i.e. models for which there is prior knowledge that many coordinates are likely to be exactly 0. This is achieved with the fairly simple idea of endowing existing PDMP samplers with “sticky” coordinate axes, coordinate planes etc. Upon hitting those subspaces, an event is triggered during which the process sticks to the subspace, this way spending some time in a sub-model. This results in non-reversible jumps between different (sub-)models. While we show that PDMP samplers in general can be made sticky, we mainly focus on the Zig-Zag sampler. Compared to the Gibbs sampler for variable selection, we heuristically derive favourable dependence of the Sticky Zig-Zag sampler on dimension and data size. The computational efficiency of the Sticky Zig-Zag sampler is further established through numerical experiments where both the sample size and the dimension of the parameter space are large.","Bayesian variable selection; Big-data; High-dimensional problems; Monte Carlo; Non-reversible jump; Piecewise deterministic Markov process; Spike-and-slab","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:c1f58038-1caf-406f-b953-8b829e7a920f","http://resolver.tudelft.nl/uuid:c1f58038-1caf-406f-b953-8b829e7a920f","A robust computational framework for simulating the dynamics of large assemblies of highly-flexible fibers immersed in viscous flow","Koshakji, Anwar (Massachusetts Institute of Technology); Chomette, Grégoire (Massachusetts Institute of Technology); Turner, Jeffrey (US Army DEVCOM Armaments Center); Jablonski, Jonathan (US Army DEVCOM Armaments Center); Haynes, Aisha (US Army DEVCOM Armaments Center); Carlucci, Donald (US Army DEVCOM Armaments Center); Giovanardi, Bianca (TU Delft Aerospace Structures & Computational Mechanics; Massachusetts Institute of Technology); Radovitzky, Raúl A. (Massachusetts Institute of Technology)","","2023","The dynamic response of flexible filaments immersed in viscous fluids is important in cell mechanics, as well as other biological and industrial processes. In this paper, we propose a parallel computational framework to simulate the fluid-structure interactions in large assemblies of highly-flexible filaments immersed in a viscous fluid. We model the deformation of each filament in 3D with a C1 geometrically-exact large-deformation finite-element beam formulation and we describe the hydrodynamic interactions by a boundary element discretization of the Stokeslet model. We incorporate a contact algorithm that prevents fiber interpenetration and avoids previously reported numerical instabilities in the flow, thus providing the ability to describe the complex evolution of large clouds of fibers over long time spans. In order to support the required long-term integration, we use implicit integration of the solid-fluid-contact coupling. We address the challenges associated with the solution of the large and dense linear system for the hydrodynamic interactions by taking advantage of the massive parallelization offered by Graphic Processing Units (GPUs), which we test up to 1000 fibers and 45000 degrees of freedom. 
We validate the framework against the well-established response of the sedimentation of a single fiber under gravity in the low to moderate flexibility range. We then reproduce previous results and provide additional insights in the large to extreme flexibility range. Finally, we apply the framework to the analysis of the sedimentation of large clouds of filaments under gravity, as a function of fiber flexibility. Owing to the long time spans afforded by our computational framework, our simulations reproduce the breakup response observed experimentally in the lower flexibility range and provide new insights into the breakup of the initial clouds in the higher flexibility range.","Beam contact; Boundary element method; Flexible filaments in Stokes flow; Fluid-structure interaction; Graphic processing units (GPU); Large-deformation beam elements","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-05-29","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:f86e9626-1405-4b0e-bde7-15de1b3f9f5b","http://resolver.tudelft.nl/uuid:f86e9626-1405-4b0e-bde7-15de1b3f9f5b","Gaussian Process Repetitive Control With Application to an Industrial Substrate Carrier System With Spatial Disturbances","Mooren, Noud (Eindhoven University of Technology); Witvoet, Gert (Eindhoven University of Technology; TNO); Oomen, T.A.E. (TU Delft Team Jan-Willem van Wingerden; Eindhoven University of Technology)","","2023","Repetitive control (RC) can perfectly attenuate disturbances that are periodic in the time domain. The aim of this article is to develop an RC approach that compensates for disturbances that are time-domain nonperiodic but are repeating in the position domain. The developed position-domain buffer consists of a Gaussian process (GP), which is learned using appropriate dynamic filters and nonequidistant data. This approach estimates position-domain disturbances resulting in perfect compensation. The method is successfully applied to a substrate carrier system, demonstrating performance robustness against time-domain nonperiodic disturbances that are amplified by traditional RC.","Gaussian processes (GPs); repetitive control (RC); spatial disturbances","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-02-02","","","Team Jan-Willem van Wingerden","","",""
"uuid:0f5bab7b-557c-4cb6-9028-e8d9b68aeee8","http://resolver.tudelft.nl/uuid:0f5bab7b-557c-4cb6-9028-e8d9b68aeee8","Air–water properties of unsteady breaking bore part 2: Void fraction and bubble statistics","Shi, Rui (University of Queensland); Wüthrich, D. (TU Delft Hydraulic Structures and Flood Risk; University of Queensland); Chanson, Hubert (University of Queensland)","","2023","Continuing from the part 1 (Shi et al., 2022) this paper presents an experimental investigation of transient void fraction and bubble statistics in a highly turbulent breaking bore with Fr1=2.4. The measurements were conducted using a combination of dual-tip phase-detection probes and an ultra-high-speed video camera. The enclosed bubble detection technique (EBDT) used the synchronised probe and camera signals to provide the contour of instantaneous void fraction in the bore roller. The ensemble-averaged void fraction was derived, and compared to analytical solutions of air diffusion models. The bubble statistics were characterised by the bubble clustering properties, pseudo bubble count rate and bubble size spectrum. The clustering data showed the non-random bubble grouping in the shear layer, and the bubble size distributions N(r) followed a commonly adopted bubble break-up model: N(r)∝r−m, where r was the equivalent bubble radius in the present study. The comparison indicated that, in the breaking bore, its air diffusion process was similar to that in a stationary hydraulic jump, and the bubble break-up process was comparable to that in breaking waves.","Breaking bore; Bubble clustering; Bubble size spectrum; Dual-tip phase detection probe; Image processing; Unsteady gas–liquid flow; Void fraction","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-06-11","","","Hydraulic Structures and Flood Risk","","",""
"uuid:4cf4c51a-6995-4fb3-bb07-2803625aa802","http://resolver.tudelft.nl/uuid:4cf4c51a-6995-4fb3-bb07-2803625aa802","Self-assembly of ammonium assimilation microbiomes regulated by COD/N ratio","Han, Fei (Shandong University); Zhang, Mengru (Shandong University); Li, Zhe (Shandong University); Liu, Zhe (School of Environmental Science and Engineering; Shandong University); Han, Yufei (Shandong University); Li, L. (TU Delft Sanitary Engineering); Zhou, Weizhi (Shandong University)","","2023","Marine microorganisms have an inherent advantage in the treatment of saline wastewater due to their halophilic properties. Ammonium assimilation is the most important and common nitrogen conversion pathway in the ocean, which means that it may be a suitable nitrogen removal strategy under high salinity conditions. However, the targeted construction of engineering microbiomes with ammonium assimilation function for nitrogen recovery has not been realized. Here, we constructed four halophilic ammonium assimilation microbiomes from marine microbial community under varying chemical oxygen demand (COD) to nitrogen (COD/N) ratios. The regulation of COD/N ratio on microbial self-assembly was explored at the phenotypic, genetic, and microbial levels. The results of nitrogen balance tests, functional genes abundance and microbial community structure confirmed that the microbiomes regulated by different COD/N ratios all performed obligate ammonium assimilation functions. More than 93% of ammonium, 90% of TN, 98% of COD, and 82% of phosphorus were simultaneously removed by microbial assimilation under the COD/N ratio of 20. COD/N ratios significantly affected the self-assembly of microbiomes by selectively enriching heterotrophic microorganisms with different preference for organic carbon load. 
Additionally, the increase of COD/N ratio intensified the competition among species within the microbiome (the proportion of negative connections of microbial network increased from 5.0% to 24.4%), which may enhance the stability of community structure. Taken together, these findings can provide theoretical guidance for the construction and optimization of engineering microbiomes for synergistic nitrogen removal and recovery.","Ammonium assimilation microbiomes; Chemical oxygen demand to nitrogen (COD/N) ratios; Marine microbial community; Microbial network; Self-assembly process","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Sanitary Engineering","","",""
"uuid:4d016f2c-e3a5-48d0-ae3b-96089d03b02f","http://resolver.tudelft.nl/uuid:4d016f2c-e3a5-48d0-ae3b-96089d03b02f","Resilience assessment and management: A review on contributions on process safety and environmental protection","Chen, Chao (Central South University China); Li, Jie (Chinese Academy of Sciences); Zhao, Yixin (Norwegian University of Science and Technology (NTNU)); Goerlandt, Floris (Dalhousie University); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven); Yiliu, Liu (Norwegian University of Science and Technology (NTNU))","","2023","Resilience assessment and management of technical systems have been increasingly important as the current applications in the process industries are becoming more complex. Several review papers on resilience management methods and applications have been published by researchers from different aspects. However, none of them put the focus on bibliometric analysis of the relevant research works especially those in the process industries. This study pays attention to system resilience assessment and management, by reviewing sources of relevant publications, collaboration of institutions and authors, and development trends. In addition, the development of resilience engineering and management is further investigated through analyzing the most influential and relevant journals of process safety and environmental protection. This review provides valuable information regarding knowledge structure, evolution and influential publications, and high-level insights for future research.","Bibliometric analysis; Data visualization; Process safety and environmental protection; Resilience assessment; Resilience management","en","review","","","","","","","","","","","Safety and Security Science","","",""
"uuid:c9b07a1f-57c3-47eb-84c0-81e51907f50a","http://resolver.tudelft.nl/uuid:c9b07a1f-57c3-47eb-84c0-81e51907f50a","Fabrication sequence optimization for minimizing distortion in multi-axis additive manufacturing","Wang, W. (Dalian University of Technology); van Keulen, A. (TU Delft Mechanical, Maritime and Materials Engineering); Wu, J. (TU Delft Materials and Manufacturing)","","2023","Additive manufacturing of metal parts involves phase transformations and high temperature gradients which lead to uneven thermal expansion and contraction, and, consequently, distortion of the fabricated components. The distortion has a great influence on the structural performance and dimensional accuracy, e.g., for assembly. It is therefore of critical importance to model, predict and, ultimately, reduce distortion. In this paper, we present a computational framework for fabrication sequence optimization to minimize distortion in multi-axis additive manufacturing (e.g., robotic wire arc additive manufacturing), in which the fabrication sequence is not limited to planar layers only. We encode the fabrication sequence by a continuous pseudo-time field, and optimize it using gradient-based numerical optimization. To demonstrate this framework, we adopt a computationally tractable yet reasonably accurate model to mimic the material shrinkage in metal additive manufacturing and thus to predict the distortion of the fabricated components. Numerical studies show that optimized curved layers can reduce distortion by orders of magnitude as compared to their planar counterparts.","Fabrication sequence; Multi-axis additive manufacturing; Process planning; Thermal distortion; Topology optimization; Wire arc additive manufacturing","en","journal article","","","","","","","","","Mechanical, Maritime and Materials Engineering","","Materials and Manufacturing","","",""
"uuid:7dec2a90-0122-4713-b13c-049c323f6ddd","http://resolver.tudelft.nl/uuid:7dec2a90-0122-4713-b13c-049c323f6ddd","Operating windows for early evaluation of the applicability of advanced reactive distillation technologies","Pazmiño-Mayorga, Isabel (The University of Manchester); Jobson, Megan (The University of Manchester); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","Advanced reactive distillation technologies (ARDT) are often overlooked during process synthesis due to their complexity. This work proposes the use of operating windows with additional features to identify suitable operating limits for ARDT. Data needed to construct the operating windows are thermodynamic properties, kinetic parameters, constraints of materials and experimental methods, and heuristics. In addition, two new concepts are proposed to represent complex features: representative components and a sliding window. Results include the identification of suitable operating limits for ARDT to help assess their feasibility early in process design. The proposed approach is demonstrated by case studies. Methyl acetate production can be carried out at low pressures (0.5–3.6 atm), while lactic acid purification requires vacuum conditions (0.3–0.8 atm) to avoid thermal degradation. Tert-amyl methyl ether production was evaluated in two scenarios where the effect of side reactions is evidenced in a reduction of the reaction window due to temperature limits to favour the main reaction over side reactions. This study is the first to evaluate advanced reactive distillation technologies using a graphical representation in an operating window to aid process synthesis, where the results provide key selection insights.","Operating windows; Process intensification; Process synthesis; Reactive distillation","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:c25f95e0-fc66-478d-91b3-f5b26d28c55f","http://resolver.tudelft.nl/uuid:c25f95e0-fc66-478d-91b3-f5b26d28c55f","Influence of process-based, stochastic and deterministic methods for representing heterogeneity in fluvial geothermal systems","Major, Márton (Aarhus University); Daniilidis, Alexandros (TU Delft Reservoir Engineering; University of Geneva); Hansen, Thomas Mejer (Aarhus University); Khait, M. (TU Delft Reservoir Engineering; Stone Ridge Technology); Voskov, D.V. (TU Delft Reservoir Engineering; Stanford University)","","2023","Focus is on comparing stochastic, process-based and deterministic methods for modelling heterogeneity in hydraulic properties of fluvial geothermal reservoirs. Models are considered a generalized representation of a fluvial sequence in the upper part of the Gassum Formation in northern Denmark. Two ensemble realizations of process-based and stochastic heterogeneity were simulated using the state-of-the-art numerical modelling software, Delft Advanced Research Terra Simulator (DARTS), to assess differences on a statistically relevant sample size. Simulator settings were optimized to achieve two orders of magnitude improvement in simulation time. Our general findings show that the stochastic and process-based methods produce nearly identical results in terms of predicted breakthrough time and production temperature decline for high net-to-gross ratios (N/G). Simple homogeneous and layered models overestimate breakthrough and underestimate temperature decline. More complex representations of facies in process-based models show smaller variance in results and stay within the limits of ensemble runs compared to simpler facies representations. 
Results indicate that a multivariate Gaussian-based stochastic representation of heterogeneity provides comparable thermal response to a process-based model in fluvial systems of similar quality.","Direct use; Geothermal; Heterogeneity; Process-based; Stochastic; Thermal response","en","journal article","","","","","","","","","","","Reservoir Engineering","","",""
"uuid:161d9c30-204a-4758-a6b4-c5ba8fefc354","http://resolver.tudelft.nl/uuid:161d9c30-204a-4758-a6b4-c5ba8fefc354","Dirichlet form analysis of the Jacobi process","Grothaus, Martin (Technische Universität Kaiserslautern); Sauerbrey, M. (TU Delft Analysis)","","2023","We construct and analyze the Jacobi process – in mathematical biology referred to as Wright–Fisher diffusion – using a Dirichlet form. The corresponding Dirichlet space takes the form of a Sobolev space with different weights for the function itself and its derivative and can be rewritten in a canonical form for strongly local Dirichlet forms in one dimension. Additionally to the statements following from the general theory on these forms, we obtain orthogonal decompositions of the Dirichlet space, derive Sobolev embeddings, verify functional inequalities of Hardy type and analyze the long time behavior of the associated semigroup. We deduce corresponding properties of the Markov process and show that it is up to minor technical modifications a solution to the Jacobi SDE. We also provide uniqueness statements for this SDE, such that properties of general solutions follow.","Dirichlet form; Hypergeometric functions; Jacobi process; Wright–Fisher diffusion","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Analysis","","",""
"uuid:f703f992-f444-42cb-88b1-9c81ae1109d6","http://resolver.tudelft.nl/uuid:f703f992-f444-42cb-88b1-9c81ae1109d6","A systematic review and comparison of automated tools for quantification of fibrous networks","de Vries, J.J. (Erasmus MC); Laan, Daphne M. (Erasmus MC); Frey, F.F.F. (TU Delft BN/Timon Idema Lab); Koenderink, G.H. (TU Delft BN/Gijsje Koenderink Lab); de Maat, Moniek P.M. (Erasmus MC)","","2023","Fibrous networks are essential structural components of biological and engineered materials. Accordingly, many approaches have been developed to quantify their structural properties, which define their material properties. However, a comprehensive overview and comparison of methods is lacking. Therefore, we systematically searched for automated tools quantifying network characteristics in confocal, stimulated emission depletion (STED) or scanning electron microscopy (SEM) images and compared these tools by applying them to fibrin, a prototypical fibrous network in thrombi. Structural properties of fibrin such as fiber diameter and alignment are clinically relevant, since they influence the risk of thrombosis. Based on a systematic comparison of the automated tools with each other, manual measurements, and simulated networks, we provide guidance to choose appropriate tools for fibrous network quantification depending on imaging modality and structural parameter. These tools are often able to reliably measure relative changes in network characteristics, but absolute numbers should be interpreted with care. Statement of significance: Structural properties of fibrous networks define material properties of many biological and engineered materials. Many methods exist to automatically quantify structural properties, but an overview and comparison is lacking. In this work, we systematically searched for all publicly available automated analysis tools that can quantify structural properties of fibrous networks. 
Next, we compared them by applying them to microscopy images of fibrin networks. We also benchmarked the automated tools against manual measurements or synthetic images. As a result, we give advice on which automated analysis tools to use for specific structural properties. We anticipate that researchers from a large variety of fields, ranging from thrombosis and hemostasis to cancer research, and materials science, can benefit from our work.","Fibrin; Fibrous networks; Image processing; Microscopy; Systematic review","en","journal article","","","","","","","","","","","BN/Timon Idema Lab","","",""
"uuid:dc7f8f99-53e1-4fda-a9c7-0bb61ae792e7","http://resolver.tudelft.nl/uuid:dc7f8f99-53e1-4fda-a9c7-0bb61ae792e7","An initial study of interference coloration for quantifying the texture and fabric of ice","Owen, C.C. (TU Delft Offshore Engineering); Hendrikse, H. (TU Delft Offshore Engineering)","","2023","The manual application of universal (Rigsby) stage techniques is commonly used to determine the fabric of thin sections of ice viewed with crossed-polarized light. This process can require hours of focus in cold conditions to identify the c-axis of each grain in a thin section. Automated ice texture and fabric methods of several forms exist but are rarely implemented beyond the field of glaciology. The present study introduces a method based on the theory of interference coloration for automated ice texture and quarter fabric analysis by using in-plane conventional photography of an ice thin section as input. The method is compatible with universal stages and polariscopes, and is not restricted by the planar-face dimensions of the thin section, allowing for thin section analysis of any size when sufficient digital camera resolution is available. Light source color temperature and chromatic adaptation are considered in the interference coloration theory, and ice fabrics are simulated for reference in identifying ice types. Sample thin section texture and quarter fabric analyses from freshwater lake and laboratory-grown ice are presented to demonstrate the applications of the method. 
The method is compared with the Rigsby stage technique, which yielded mean (standard deviation of) azimuth and inclination errors of 2.9 (1.0) and 11.5 (8.0) degrees, respectively, thereby demonstrating accuracy sufficient for quantifying quarter fabrics when considering a mean standard deviation in inclination of 5.4 degrees with the Rigsby stage technique.","Birefringence; c-axis; Grain boundary; Ice microstructure; Image processing","en","journal article","","","","","","","","","","","Offshore Engineering","","",""
"uuid:3befb97c-e972-4615-a2a1-276266ad098b","http://resolver.tudelft.nl/uuid:3befb97c-e972-4615-a2a1-276266ad098b","Nanowire-based integrated photonics for quantum information and quantum sensing","Chang, J. (TU Delft QN/Groeblacher Lab; Kavli Institute of Nanoscience Delft); Gao, Jun (AlbaNova University Center); Esmaeil Zadeh, I.Z. (TU Delft ImPhys/Esmaeil Zadeh group); Elshaari, Ali W. (AlbaNova University Center); Zwiller, Val (AlbaNova University Center)","","2023","At the core of quantum photonic information processing and sensing, two major building pillars are single-photon emitters and single-photon detectors. In this review, we systematically summarize the working theory, material platform, fabrication process, and game-changing applications enabled by state-of-the-art quantum dots in nanowire emitters and superconducting nanowire single-photon detectors. Such nanowire-based quantum hardware offers promising properties for modern quantum optics experiments. We highlight several burgeoning quantum photonics applications using nanowires and discuss development trends of integrated quantum photonics. Also, we propose quantum information processing and sensing experiments for the quantum optics community, and future interdisciplinary applications.","epitaxial quantum dots; nanowires; photonics integrated circuits; quantum information processing; quantum sensing; superconducting nanowire single photon detector","en","review","","","","","","","","","","","QN/Groeblacher Lab","","",""
"uuid:f886a863-6e06-4af2-a0c9-8a5217259436","http://resolver.tudelft.nl/uuid:f886a863-6e06-4af2-a0c9-8a5217259436","Moral rhetoric in discrete choice models: a Natural Language Processing approach","Szép, T. (TU Delft Transport and Logistics); van Cranenburgh, S. (TU Delft Transport and Logistics); Chorus, C.G. (TU Delft Industrial Design Engineering; TU Delft Engineering, Systems and Services)","","2023","This paper proposes a new method to combine choice- and text data to infer moral motivations from people’s actions. To do this, we rely on moral rhetoric, in other words, extracting moral values from verbal expressions with Natural Language Processing techniques. We base moral rhetoric on a well-established moral psychology theory called Moral Foundations Theory. We use moral rhetoric as input in Discrete Choice Models to gain insights into moral behaviour based on people’s words and actions. We test our method in a case study of voting and party defection in the European Parliament. Our results indicate that moral rhetoric has significant explanatory power in modelling voting behaviour. We interpret the results in the light of political science literature and propose ways for future investigations.","Discrete choice models; Moral Foundations Theory; Moral rhetoric; Natural Language Processing","en","journal article","","","","","","","","","Industrial Design Engineering","Engineering, Systems and Services","Transport and Logistics","","",""
"uuid:7bf8b85b-d827-445b-aaac-9db6b8110ef1","http://resolver.tudelft.nl/uuid:7bf8b85b-d827-445b-aaac-9db6b8110ef1","Uncertainties and their treatment in the quantitative risk assessment of domino effects: Classification and review","Xu, Y. (TU Delft Industrial Design Engineering); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven); Yang, M. (TU Delft Safety and Security Science; Universiti Teknologi Malaysia; University of Tasmania); Yuan, S. (TU Delft Safety and Security Science); Chen, Chao (Southwest Petroleum University)","","2023","Domino accidents are typical low-frequency and high-consequence events in chemical process industries. Applying quantitative risk assessment (QRA) in domino accident assessment is challenging due to the uncertainties in the escalation process. Meanwhile, the outcomes of QRA are subject to a certain degree of unreliability due to the inappropriate representation of uncertainty. This paper reviews the literature in the field of QRA of domino accidents that may happen in the chemical process industries. Firstly, the sources of uncertainty in risk assessment of domino effects are identified and categorized based on a fundamental structure of uncertainty and a QRA framework. Furthermore, the current methodologies and approaches applied for handling various uncertainties (input uncertainty, model parameter uncertainty, and model structure uncertainty) in the QRA related to domino effects are reviewed. Based on the literature review results, current challenges with respect to uncertainty handling in QRA of domino accidents are discussed, and recommendations for future research are given before the conclusions are presented. This study helps researchers to get insights into the interface between uncertainty fundamentals and the QRA framework and the current status of uncertainty handling in the QRA of domino effects. 
Furthermore, this study promotes the development of new approaches for handling uncertainty in domino accident analysis.","Chemical process industry; Domino effects; Quantitative risk assessment; Uncertainty fundamentals; Uncertainty handling","en","journal article","","","","","","","","","Industrial Design Engineering","","Safety and Security Science","","",""
"uuid:60f03de2-2dac-4792-b307-7f42a467d6a2","http://resolver.tudelft.nl/uuid:60f03de2-2dac-4792-b307-7f42a467d6a2","Inference and dynamic decision-making for deteriorating systems with probabilistic dependencies through Bayesian networks and deep reinforcement learning","Morato, P. G. (Université de Liège); Andriotis, C. (TU Delft Architectural Technology); Papakonstantinou, K. G. (The Pennsylvania State University); Rigo, P. (Université de Liège)","","2023","In the context of modern engineering, environmental, and societal concerns, there is an increasing demand for methods able to identify rational management strategies for civil engineering systems, minimizing structural failure risks while optimally planning inspection and maintenance (I&M) processes. Most available methods simplify the I&M decision problem to the component level, often assuming statistical, structural, or cost independence among components, due to the computational complexity associated with global optimization methodologies under joint system-level state descriptions. In this paper, we propose an efficient algorithmic framework for inference and decision-making under uncertainty for engineering systems exposed to deteriorating environments, providing optimal management strategies directly at the system level. In our approach, the decision problem is formulated as a factored partially observable Markov decision process, whose dynamics are encoded in Bayesian network conditional structures. The methodology can handle environments under equal or general, unequal deterioration correlations among components, through Gaussian hierarchical structures and dynamic Bayesian networks, decoupling the originally joint system state space to component networks conditional on shared random variables. 
In terms of policy optimization, we adopt a deep decentralized multi-agent actor-critic (DDMAC) reinforcement learning approach, in which the policies are approximated by actor neural networks guided by a critic network. By including deterioration dependence in the simulated environment, and by formulating the cost model at the system level, DDMAC policies intrinsically consider the underlying system-effects. This is demonstrated through numerical experiments conducted for both a 9-out-of-10 system and a steel frame under fatigue deterioration. Results demonstrate that DDMAC policies offer substantial benefits when compared to state-of-the-art heuristic approaches. The inherent consideration of system-effects by DDMAC strategies is also interpreted based on the learned policies.","Decision analysis; Deep reinforcement learning; Dynamic Bayesian networks; Infrastructure management; Partially observable Markov decision processes; System reliability analysis","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-08-11","","","Architectural Technology","","",""
"uuid:fab42532-d3b0-4be2-ba63-531dbb2697be","http://resolver.tudelft.nl/uuid:fab42532-d3b0-4be2-ba63-531dbb2697be","Monitoring and modeling dispersal of a submerged nearshore berm at the mouth of the Columbia River, USA","Stevens, Andrew W. (Pacific Coastal and Marine Science Center); Moritz, Hans R. (U.S. Army Corps of Engineers); Elias, Edwin P.L. (Deltares); Gelfenbaum, Guy R. (Pacific Coastal and Marine Science Center); Ruggiero, Peter R. (Oregon State University); Pearson, S.G. (TU Delft Coastal Engineering; Deltares); McMillan, James M. (U.S. Army Corps of Engineers); Kaminsky, George M. (Washington State Department of Ecology)","","2023","A submerged, low-relief nearshore berm was constructed in the Pacific Ocean near the mouth of the Columbia River, USA, using 216,000 m3 of sediment dredged from the adjacent navigation channel. The material dredged from the navigation channel was placed on the northern flank of the ebb-tidal delta in water depths between 12 and 15 m and created a distinct feature that could be tracked over time. Field measurements and numerical modeling were used to evaluate the transport pathways, time scales, and physical processes responsible for dispersal of the berm and evaluate the suitability of the location for operational placement of dredged material to enhance the sediment supply to eroding beaches onshore of the placement site. Repeated multibeam bathymetric surveys characterized the initial berm morphology and dispersion of the berm between September 22, 2020, and March 10, 2021. During this time, the volume of sediment within the berm decreased by about 40% to 127,000 m3, the maximum height decreased by almost 60%, and the center of the deposit shifted onshore over 200 m. 
Observations of berm morphology were compared with predictions from a three-dimensional hydrodynamic and sediment transport model application to refine poorly constrained model input parameters including sediment transport coefficients, bed schematization, and grain size. The calibrated sediment transport model was used to predict the amount, timing, and direction of transport outside of the observed survey area. Model simulations predicted that tidal currents were weak in the vicinity of the berm and wave processes including enhanced bottom stresses and asymmetric bottom orbital velocities resulted in dominant onshore movement of sediment from the berm toward the coastline. Roughly 50% of the berm volume was predicted to disperse away from the initial placement site during the 169 day hindcast. Between 9 and 17% of the initial volume of the berm was predicted to accumulate along the shoreface of a shoreline reach experiencing chronic erosion directly onshore of the placement site. Scenarios exploring alternate placement locations suggested that the berm was relatively effective in enhancing the sediment supply along the eroding coastline north of the inlet. The transferable monitoring and modeling framework developed in this study can be used to inform implementation of strategic nearshore placements and regional sediment management in complex, high-energy coastal environments elsewhere.","Delft3D; Nearshore berm; Process-based modeling; Sediment transport","en","journal article","","","","","","","","","","","Coastal Engineering","","",""
"uuid:bcd84aae-c688-43b7-a41a-6744cffa8bc1","http://resolver.tudelft.nl/uuid:bcd84aae-c688-43b7-a41a-6744cffa8bc1","Polynomial Chaos Expansion-Based Enhanced Gaussian Process Regression for Wind Velocity Field Estimation from Aircraft-Derived Data","Marinescu, M. (TU Delft Control & Simulation; Universidad Rey Juan Carlos); Olivares, Alberto (Universidad Rey Juan Carlos); Staffetti, Ernesto (Universidad Rey Juan Carlos); Sun, Junzi (TU Delft Control & Simulation)","","2023","This paper addresses the problem of spatiotemporal wind velocity field estimation for air traffic management applications. Using data obtained from aircraft, the eastward and northward components of the wind velocity field inside a specific air space are calculated as functions of time. Both short-term wind velocity field forecasting and wind velocity field reconstruction are performed. Wind velocity data are indirectly obtained from the states of the aircraft flying in the relevant airspace, which are broadcast by the ADS-B and Mode-S aircraft surveillance systems. The wind velocity field is estimated by combining two data-driven techniques: the polynomial chaos expansion and the Gaussian process regression. The former approximates the global behavior of the wind velocity field, whereas the latter approximates the local behavior. Both the eastward and northward components of the wind velocity field must be estimated, making this a multiple-output problem. This method enables the estimation of the wind velocity field at any spatiotemporal location using wind velocity observations from any spatiotemporal location, eliminating the need for spatial and temporal grids. Moreover, since the method proposed in this article allows the probability distributions of the estimates to be computed, confidence intervals can also be obtained. 
Furthermore, since the method presented in this paper allows for data assimilation, it can be used online to continuously update the wind velocity field estimation. The method is tested on different wind scenarios and different training-test data configurations, by means of which the consistency between the results of the wind velocity field forecasting and the wind velocity field reconstruction is checked. Finally, the ERA5 meteorological reanalysis data of the European Centre for Medium-Range Weather Forecasts are used to validate the proposed technique. The results show that the method is able to reliably estimate the wind velocity field from aircraft-derived data.","ADS-B; air traffic management; Gaussian process regression; Mode S; polynomial chaos expansion; wind velocity field estimation","en","journal article","","","","","","","","","","","Control & Simulation","","",""
"uuid:ed93d2c3-1c0e-4899-86bf-78fbe693ef20","http://resolver.tudelft.nl/uuid:ed93d2c3-1c0e-4899-86bf-78fbe693ef20","The ultimate performance of the Rasnik 3-point alignment system","van der Graaf, H. (TU Delft ImPhys/Hoogenboom group; Nikhef); Bertolini, Alessandro (Nikhef); van Heijningen, Joris (Université Catholique de Louvain); Bouwens, Bram (Amsterdam Scientific Instruments, Amsterdam); de Gaay Fortman, Nelson (AMOLF); van der Reep, Tom (Universiteit van Amsterdam); Otemann, Lennart (Student TU Delft; Nikhef)","","2023","The Rasnik system is a 3-point optical displacement monitor with sub-nanometer precision. The CCD-Rasnik alignment system was developed in 1993 for monitoring the alignment of the muon chambers of the ATLAS Muon Spectrometer at CERN. Since then, the development has continued as new CMOS imaging pixel chips became available. In this work the system processes and parameters that limit the precision are studied. We conclude that the spatial resolution of Rasnik is only limited by the quantum fluctuations of the photon flux arriving at the pixels of the image sensor. The results of two Rasnik systems are compared to results from simulations, which are in good agreement. The best spatial resolution obtained was 7 pm/Hz. Finally, some applications of high-precision Rasnik systems are set out.","2D displacement monitoring; Detection of gravitational waves; Detector alignment and calibration methods; Image processing; Interferometry; Length sensing and monitoring; Quantum fluctuations; Seismic sensors; Shot noise","en","journal article","","","","","","","","","","","ImPhys/Hoogenboom group","","",""
"uuid:1fc79eac-7291-4e92-8ff2-1a9169288c2b","http://resolver.tudelft.nl/uuid:1fc79eac-7291-4e92-8ff2-1a9169288c2b","Measurement and modelling of dynamic fluid saturation in carbon reinforcements","Teixidó, Helena (Swiss Federal Institute of Technology); Broggi, G.C. (TU Delft Aerospace Manufacturing Technologies; Swiss Federal Institute of Technology); Caglar, Baris (TU Delft Aerospace Manufacturing Technologies); Michaud, Véronique (Swiss Federal Institute of Technology)","","2023","We propose a methodology to monitor the progressive saturation of a non-translucent unidirectional carbon fabric stack through its thickness by means of X-ray radiography and extract the dynamic saturation curves using image analysis. Four constant flow rate injections with increasing flow speed were carried out. These were simulated by a numerical two-phase flow model for both capillary and viscous leading flow conditions. The hydraulic functions describing pressure and relative permeability versus saturation were determined by fitting the saturation curves using a heuristic optimization routine. As the fluid velocity increases and the flow regime at the flow front shifts from capillary to hydrodynamically driven, the resulting capillary pressure curves for a given saturation level are shifted to higher values, from negative to positive. These as well as the capillary pressure calculated from the pressure drop within the unsaturated region of the fabric correlate well with a corresponding change in the averaged dynamic contact angle.","A. Carbon fibers; B. Wettability; C. Process Modeling; E. Resin Flow","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:118b8b7d-c127-405c-a915-963aeaf90525","http://resolver.tudelft.nl/uuid:118b8b7d-c127-405c-a915-963aeaf90525","Influence of environmental conditions on accumulated polyhydroxybutyrate in municipal activated sludge","Pei, R. (TU Delft BT/Environmental Biotechnology; Wetsus, Centre for Sustainable Water Technology); Tarek-Bahgat, N. (Wetsus, Centre for Sustainable Water Technology); van Loosdrecht, Mark C.M. (TU Delft BT/Environmental Biotechnology); Kleerebezem, R. (TU Delft BT/Environmental Biotechnology); Werker, A. (Wetsus, Centre for Sustainable Water Technology)","","2023","Poly(3-hydroxybutyrate) (PHB) was accumulated in full-scale municipal waste activated sludge at pilot scale. After accumulation, the fate of the PHB-rich biomass was evaluated over two weeks as a function of initial pH (5.5, 7.0 and 10), and incubation temperature (25, 37 and 55 °C), with or without aeration. PHB was consumed under aerobic conditions as expected, with first-order rate constants in the range of 0.19 to 0.55 d−1. Under anaerobic conditions, up to 63 percent of the PHB was consumed within the first day (initial pH 7, 55 °C). Subsequently, with continued anaerobic conditions, the polymer content remained stable in the biomass. Degradation rates were lower for acidic anaerobic incubation conditions at a lower temperature (25 °C). Polymer thermal properties were measured in the dried PHB-rich biomass and for the polymer recovered by solvent extraction using dimethyl carbonate. PHB quality changes in dried biomass, indicated by differences in polymer melt enthalpy, correlated to differences in the extent of PHB extractability. Differences in the expressed PHB-in-biomass melt enthalpy that correlated to the polymer extractability suggested that yields of polymer recovery by extraction can be influenced by the state or quality of the polymer generated during downstream processing. 
Different post-accumulation process biomass management environments were found to influence the polymer quality and can also influence the extraction of non-polymer biomass. An acidic post-accumulation environment resulted in higher melt enthalpies in the biomass and, consequently, higher extraction efficiencies. Overall, acidic environmental conditions were found to be favourable for preserving both quantity and quality after PHB accumulation in activated sludge.","Biopolymer; Downstream processing; Poly(3-hydroxybutyrate)(PHB); Polyhydroxyalkanoate (PHA); Polymer properties; Waste activated sludge","en","journal article","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:f4bc96ef-1738-479c-bed6-b8a4f85d2cf1","http://resolver.tudelft.nl/uuid:f4bc96ef-1738-479c-bed6-b8a4f85d2cf1","Learning from flowsheets: A generative transformer model for autocompletion of flowsheets","Vogel, G.C. (TU Delft ChemE/Product and Process Engineering); Schulze Balhorn, L. (TU Delft ChemE/Product and Process Engineering); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering)","","2023","We propose a novel method enabling autocompletion of chemical flowsheets. This idea is inspired by the autocompletion of text. We represent flowsheets as strings using the text-based SFILES 2.0 notation and learn the grammatical structure of the SFILES 2.0 language and common patterns in flowsheets using a transformer-based language model. We pre-train our model on synthetically generated flowsheet topologies to learn the flowsheet language grammar. Then, we fine-tune our model in a transfer learning step on real flowsheet topologies. Finally, we use the trained model for causal language modeling to autocomplete flowsheets. Eventually, the proposed method can provide chemical engineers with recommendations during interactive flowsheet synthesis. The results demonstrate a high potential of this approach for future AI-assisted process synthesis but also reveal the limitations at the present state and the next steps that need to be taken to deploy this technique in realistic flowsheet synthesis scenarios.","Flowsheet completion; Flowsheet synthesis; Generative transformer model; Natural language processing; SFILES 2.0","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:51cb1c9e-69de-48da-8ff1-b86e897db2c3","http://resolver.tudelft.nl/uuid:51cb1c9e-69de-48da-8ff1-b86e897db2c3","Perceived Appropriateness: A Novel View for Remediating Perceived Inappropriate Robot Navigation Behaviors","Zhou, Y. (TU Delft Internet of Things; TU Delft Industrial Design Engineering)","","2023","Robots navigating in social environments inevitably exhibit behavior perceived as inappropriate by people, which they will repeat unless they are aware of it, hindering their social acceptance. This highlights the importance of robots detecting and adapting to the perceived appropriateness of their behavior, in line with what we found in a systematic literature review. Therefore, we have conducted experiments (both outdoor and indoor) to understand the perceived appropriateness of robot social navigation behavior, based on which we collected a dataset and developed a machine learning model for detecting such perceived appropriateness. To investigate the usefulness of such information and inspire robot adaptive navigation behavior design, we will further conduct a Wizard-of-Oz (WoZ) study to understand how trained human operators adapt robot behavior to people's feedback. In all, this work will enable robots to better remediate their inappropriate behavior, thus improving their social acceptance.","Adaptive Behavior; Human-Robot Interaction; Perceived Appropriateness; Social Navigation; Social Signal Processing","en","conference paper","IEEE","","","","","","","2023-09-13","Industrial Design Engineering","","Internet of Things","","",""
"uuid:3436c42b-024e-4a03-8609-9e35cebf3726","http://resolver.tudelft.nl/uuid:3436c42b-024e-4a03-8609-9e35cebf3726","Stability of backward stochastic differential equations: the general Lipschitz case","Papapantoleon, A. (TU Delft Applied Probability; National Technical University of Athens; Foundation for Research and Technology - Hellas (FORTH)); Possamaï, Dylan (ETH Zürich); Saplaouras, Alexandros (National Technical University of Athens)","","2023","In this paper, we obtain stability results for backward stochastic differential equations with jumps (BSDEs) in a very general framework. More specifically, we consider a convergent sequence of standard data, each associated to their own filtration, and we prove that the associated sequence of (unique) solutions is also convergent. The current result extends earlier contributions in the literature of stability of BSDEs and unifies several frameworks for numerical approximations of BSDEs and their implementations.","BSDE; nonlinear martingale representations; processes with jumps; random time horizon; stability; stochastically discontinuous martingales; stochastic Lipschitz generator","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:7ead94d8-ef32-4e86-9ab1-e79262142878","http://resolver.tudelft.nl/uuid:7ead94d8-ef32-4e86-9ab1-e79262142878","Thermal management in radical induced cationic frontal polymerisation for optimised processing of fibre reinforced polymers","Staal, Jeroen (Swiss Federal Institute of Technology); Smit, Edgar (Swiss Federal Institute of Technology); Caglar, Baris (TU Delft Aerospace Manufacturing Technologies); Michaud, Véronique (Swiss Federal Institute of Technology)","","2023","Radical induced cationic frontal polymerisation (RICFP) is considered a promising low energy method for processing of fibre reinforced polymers (FRPs). Optimisation of the local heat balance between reinforcement, epoxy resin and the surrounding mould is required to pave the way for its adaptation to an industrial processing method for high volume fraction structural fibre reinforced composites. In this work, we investigate several methods to control the governing heat balance in RICFP-processing of FRPs. Heat generation was controlled by tuning the initiator concentration while limitation of heat losses using highly insulating moulds was found beneficial to the front characteristics and resulting curing degrees. An optimised mould configuration allowed for self-sustaining RICFP in FRPs with fibre volume fractions (Vfs) up to 45.8%, exceeding previously reported maxima of similar systems. A process window was moreover established relating the Vf and required heat generation to the potential formation of a self-sustaining or supported front.","Fiber reinforced polymers; Frontal polymerisation; Out-of-autoclave processing; Polymer-matrix composites (PMCs)","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:0291c1f7-d746-46d1-8264-b02e05ff85b2","http://resolver.tudelft.nl/uuid:0291c1f7-d746-46d1-8264-b02e05ff85b2","A modified bias-extension test method for the characterisation of intra-ply shear deformability of hybrid metal-composite laminates","Liu, S. (TU Delft Aerospace Manufacturing Technologies); Sinke, J. (TU Delft Aerospace Manufacturing Technologies); Dransfeld, C.A. (TU Delft Aerospace Manufacturing Technologies)","","2023","The bias-extension test is one of the test methods to characterise the intra-ply shear behaviour of continuous fibre reinforced composites including fabrics and unidirectional (UD) materials. For the determination of the major mechanical properties of metals, often a uniaxial tensile test is used. Combination of these two methods for the shear deformation of hybrid metal-composite laminates is proposed comparing the method for cross-plied unidirectional prepregs and woven fabric prepregs. The effects of material constituent, shear rate, preheat temperature and normal pressure on the intra-ply shear behaviour are investigated. The results indicate that the material constituents and the frictional responses depending on processing parameters play a critical role in the shear characterisation of the hybrid laminate. The shear angle measurement at four typical strains demonstrates that the support of metal layers improves the shear deformability by delaying the onset of fibre wrinkling. This modified intra-ply shear test contributes to a better understanding of the process design for wet (uncured) hybrid metal-composite laminate manufacturing.","Intra-ply shear angles; Material constituents; Metal-composite laminates; Modified bias-extension test; Processing parameters","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:d5d829c5-9c50-473e-86fd-90080d873043","http://resolver.tudelft.nl/uuid:d5d829c5-9c50-473e-86fd-90080d873043","Microstructure and Dynamic Mechanical Behavior of Thermomechanically Rolled 3Mn–Al and 5Mn–Al Sheet Steels","Morawiec, Mateusz (Silesian University of Technology); Grajcar, Adam (Silesian University of Technology); Krawczyk, Jakub (Wrocław University of Technology); Gronostajski, Zbigniew (Wrocław University of Technology); Petrov, R.H. (TU Delft Team Maria Santofimia Navarro; Universiteit Gent)","","2023","The comparison of the dynamic mechanical behavior and microstructure of two medium manganese sheet steels (3Mn–Al and 5Mn–Al) alloyed with aluminum is aimed. Mechanical properties under dynamic tensile loads are determined by means of rotary hammer dynamic tests at strain rates of 250 and 1000 s−1 and analyzed together with the results of static tensile test. It is found that the results are significantly affected by the variations in Mn content in the range from 3 to 5 wt%. In both steels, the tensile strength increases with increasing strain rate, but the variation in the strain rate range has a moderate effect on mechanical behavior. The highest ultimate tensile strength of 1475 MPa is measured in the 5Mn–Al steel, whereas the 3Mn–Al steel is characterized by better total elongation due to a larger fraction of retained austenite and more pronounced transformation-induced plasticity effect. The results show that the mechanical properties of 3Mn steel are more strain rate sensitive than those of 5Mn steel. The microstructural features are characterized qualitatively and quantitatively by X-ray diffraction, scanning electron microscopy, and electron back-scattered diffraction techniques.","dynamic deformation; dynamic recovery; medium-Mn steels; multiphase microstructures; thermomechanical processing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-09-11","","","Team Maria Santofimia Navarro","","",""
"uuid:5217183c-5537-430c-9ee5-2cd61dbfe42d","http://resolver.tudelft.nl/uuid:5217183c-5537-430c-9ee5-2cd61dbfe42d","Sustainable Setups for the Biocatalytic Production and Scale-Up of Panthenyl Monoacyl Esters under Solvent-Free Conditions","Nieto, Susana (University of Murcia); Bernal, Juana M. (University of Murcia); Villa Aroca, R. (TU Delft BT/Biocatalysis; University of Murcia); Garcia-Verdugo, Eduardo (Universitat Jaume I); Donaire, Antonio (University of Murcia); Lozano, Pedro (University of Murcia)","","2023","A sustainable scaling-up process for the biocatalytic production of new bioactive provitamin-B5 monoacyl esters has been demonstrated. A solvent-free reaction protocol, based on the formation of eutectic mixtures between neat substrates, renders highly efficient direct esterification of free fatty acids (i.e., from C6 to C18 alkyl-chain length) with panthenol catalyzed by lipase. The scale-up from 0.5 to 500 g was evaluated by means of using several reaction systems (i.e., ultrasound assistance, orbital shaking, rotary evaporator, and mechanical stirring coupled to vacuum). For all reactor systems, the yield in panthenyl monoacyl esters was improved by increasing the length of the alkyl chain of the fatty acid (i.e., from 63% yield for panthenyl butyrate to 83% yield for panthenyl myristate). The best results (87-95% product yield, for all cases) were obtained upon a scale-up (50-500 g size) and when a vacuum system was coupled to the biocatalytic reaction unit. Under the optimized conditions, a 5-fold reduction of the amount of biocatalysts with respect to reactors without vacuum was achieved. The recovery and reuse of the immobilized enzyme for five operation cycles were also demonstrated. 
Finally, different metrics have been applied to assess the greenness of the solvent-free biocatalytic synthesis of panthenyl monoesters here reported.","biocatalysis; panthenol esters; scaling-up; solvent-free; sustainable processes","en","journal article","","","","","","","","","","","BT/Biocatalysis","","",""
"uuid:7e89aebf-5db7-4400-93b7-8c7da1a575eb","http://resolver.tudelft.nl/uuid:7e89aebf-5db7-4400-93b7-8c7da1a575eb","The quest for a better solvent for the direct hydration of cyclohexene: From molecular screening to process design","Wang, Xiaoda (Fuzhou University); Zhao, Yuqing (Fuzhou University); Han, Lumin (Fuzhou University); Li, Ling (Fuzhou University); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","Cylcohexanol is an essential bulk chemical that can be produced via cyclohexene hydration, a liquid-liquid two-phase reaction that is limited by the low reaction rate and the equilibrium conversion. Adding an appropriate solvent is the most promising method to break through these limitations. However, in previous works the solvent was almost blindly selected without a global consideration. In this work, a rational multiscale method is proposed for the effective selection of an economical and sustainable solvent for the direct hydration of cyclohexene. At the molecular scale, liquid-liquid phase equilibrium was estimated using group contribution methods to rapidly screen the potential solvent candidates from a range of organics, based on the partition coefficient. At the reactor scale, the candidates were experimentally investigated to pick out the solvents that could significantly improve the conversion, without introducing side reactions or deactivating the catalyst. At the process scale, the total annual cost (TAC), CO2 emission, and other metrics were calculated to evaluate the eco-efficiency of all solvents. Using this multi-scale method, acetophenone was selected as an eco-efficient solvent from over 100 organics, resulting in the reduction of TAC by 8 % and CO2 emission by 17 % in the production process. 
Using acetophenone also led to the increase of cyclohexanol yield from 12.3 % to 27.6 % without the occurrence of side reactions and catalyst deactivation.","Cyclohexene; Hydration; Multi-scale; Process analysis; Solvent selection","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:fa331fbc-6f45-4e3d-9da0-6a326f7c2b5e","http://resolver.tudelft.nl/uuid:fa331fbc-6f45-4e3d-9da0-6a326f7c2b5e","Investigation on the technical performance and workability of hot-melting road marking materials using for the high-altitude area","Lu, Jing (Anhui Open University); Zhang, Min (Henan University); Feng, Jianlin (Wuhan Institute of Technology); Gao, Y. (TU Delft Pavement Engineering); Yang, Ye (Wuhan Institute of Technology); Li, Yuanyuan (Wuhan Institute of Technology); Li, Linglin (University of Nottingham)","","2023","The special environment conditions in high altitude areas leads to serious cracking and peeling of road hot-melt marking coatings. In order to improve the durability of marking paint, a durable hot-melt marking paint was designed by modifying the paint with toughening-agent, rheological agent, and anti-aging agent. The modification mechanism of the modified coating was revealed through TG and FTIR analysis. The low-temperature anti cracking, adhesion, and anti-aging properties of the modified coating were studied by low-temperature bending test, interlaminar shear test and UV aging test. Besides this, the workability of modified coatings was tested onsite. The results showed that the mass loss rate of the rheological agent is 0.15% at 240 °C, the modified coating had good thermal stability within the mixing temperature range of the coating. There was no chemical change between the modifier and the coating, which was a physical blend. The modified marking paint had good fluidity, softening point and drying time, but its compressive strength was slightly reduced. With the increase of the content of the toughener, the low temperature crack resistance and adhesion of the marking coating gradually increase. When the toughener is 5%, the flexural tensile strain of the marking coating beam increased by 79.1%, and the adhesion strength of asphalt concrete increased by 53.4%. 
The anti-aging agent can shield most of the UV radiation and improve the anti-aging property of the coating by about 30%. The field process validation achieved the expected results. The modified coating has excellent low-temperature crack resistance, adhesion, UV aging resistance, and other properties, and has good application prospects in harsh environment areas.","Modification mechanism; Process validation; Road marking paint; Road performance; Working performance","en","journal article","","","","","","","","","","","Pavement Engineering","","",""
"uuid:a8407194-88f2-43f6-9490-b751b2f1643a","http://resolver.tudelft.nl/uuid:a8407194-88f2-43f6-9490-b751b2f1643a","Synthesis and optimization of energy integrated advanced distillation sequences","Li, Q. (TU Delft ChemE/Product and Process Engineering; The University of Manchester); Finn, Adrian J. (Costain House, Manchester); Doyle, Stephen J. (The University of Manchester); Smith, Robin (The University of Manchester); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","This paper explores the basis on which reliable screening of distillation sequences for energy-efficient separation of zeotropic multicomponent mixtures can be carried out. A case study for the separation of natural gas liquids is used to demonstrate the approach. To solve this generic problem, a screening algorithm has been developed using optimization of a superstructure for sequence synthesis using shortcut models, in conjunction with a transportation algorithm for the synthesis of the heat integration arrangement. Different approaches for the inclusion of heat integration are explored and compared. The best few designs from this screening are then evaluated using rigorous simulation. It has been found that separation problems of the type explored can be screened reliably using shortcut distillation models in conjunction with the synthesis of heat exchanger network designs. Non-integrated designs using thermally coupled complex columns show much better performance than the corresponding designs using simple columns. However, once heat integration is included the difference between designs using complex columns and simple columns narrows significantly.","Distillation sequencing; Energy integration; Process design; Process optimization; Process synthesis","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:0ed0d0ed-19d1-4a95-b9c7-cd1444745130","http://resolver.tudelft.nl/uuid:0ed0d0ed-19d1-4a95-b9c7-cd1444745130","Improving state estimation through projection post-processing for activity recognition with application to football","Ciszewski, M.G. (TU Delft Statistics); Söhl, J. (TU Delft Statistics); Jongbloed, G. (TU Delft Statistics)","","2023","The past decade has seen an increased interest in human activity recognition based on sensor data. Most often, the sensor data come unannotated, creating the need for fast labelling methods. For assessing the quality of the labelling, an appropriate performance measure has to be chosen. Our main contribution is a novel post-processing method for activity recognition. It improves the accuracy of the classification methods by correcting for unrealistic short activities in the estimate. We also propose a new performance measure, the Locally Time-Shifted Measure (LTS measure), which addresses uncertainty in the times of state changes. The effectiveness of the post-processing method is evaluated, using the novel LTS measure, on the basis of a simulated dataset and a real application on sensor data from football. The simulation study is also used to discuss the choice of the parameters of the post-processing method and the LTS measure.","Activity recognition; Performance measures; Post-processing; Wearable sensors","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:2dbb6088-7131-44a2-9cc3-116a0856b5cc","http://resolver.tudelft.nl/uuid:2dbb6088-7131-44a2-9cc3-116a0856b5cc","Industrial experience in using cyclic distillation columns for food grade alcohol purification","Bedryk, Olesja (Maleta Cyclic Distillation LLC OU, Tallinn); Shevchenko, Alexander (National University of Food Technologies, Kyiv); Mishchenko, O. S. (Ukrainian Research Institute for Alcohol and Biotechnology of Food Products, Kyiv); Maleta, Vladimir N. (Maleta Cyclic Distillation LLC OU, Tallinn); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","This study provides novel details about the industrial use of cyclic distillation in the production of food-grade alcohol, which confirmed the theoretical predictions of increasing separation efficiency. Increasing the profitability of the production of ethanol food grade is primarily associated with an increase in product quality and reduction of energy costs per unit of production. One of the ways to solve these problems is to improve the ethanol purification technology by using cyclic distillation, which allows reduction of energy costs and higher productivity by removing impurities at a higher concentration (leading also to waste reduction). The purification of ethanol from impurities (head and intermediate type) is carried out in hydro-selection columns. Critically, the volatility of most components depends on the actual ethanol concentration on the stage. This study investigated the distribution of ethanol on the trays depending on the water feed stage to the column and showed the optimal distribution of hydro-selective water in an industrial column to allow the highest possible separation efficiency of components during cyclic distillation. 
A cyclic distillation column with 15 Maleta trays was superior in separation capacity and performance as compared to a traditional column with 50 bubble cap trays.","Cyclic distillation; Ethanol food-grade; Fluid separations; Industrial process; Process intensification","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:9f8a69fd-6754-4aaa-b92e-96e61c5267d1","http://resolver.tudelft.nl/uuid:9f8a69fd-6754-4aaa-b92e-96e61c5267d1","Advanced purification of isopropanol and acetone from syngas fermentation","Jankovic, T.J. (TU Delft BT/Bioprocess Engineering); Straathof, Adrie J.J. (TU Delft BT/Bioprocess Engineering); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","BACKGROUND: Isopropanol and acetone production by syngas fermentation is a promising alternative to conventional fossil carbon-dependent production. However, this alternative technology has not yet been scaled up to an industrial level owing to the relatively low product concentrations (about 5 wt% in total). This original research aims to develop cost-effective and energy-efficient processes for the recovery of isopropanol and acetone from highly dilute fermentation broth (>94 wt% water) for large-scale production (about 100 ktIPA+AC y−1). RESULTS: Vacuum distillation and pass-through distillation enhanced with heat pumps or multi-effect distillation were efficiently coupled with regular atmospheric distillation and extractive distillation in several innovative intensified downstream processes. Over 99.2% of isopropanol and 100% of acetone were recovered as high-purity end-products (>99.8 wt%). Advanced heat pumping (mechanical vapor recompression) and heat integration techniques were implemented to decrease total annual costs (0.109–0.137 USD kgIPA+AC−1), reduce energy requirements (1.348–2.043 kWth h kgIPA+AC−1) and lower CO2 emissions (0.067–0.191 kgCO2 kgIPA+AC−1), resulting in highly competitive recovery processes. CONCLUSION: The proposed three novel isopropanol and acetone recovery processes from dilute broth significantly contribute to the expansion of sustainable industrial fermentation. Furthermore, this original research is the first one to develop novel pass-through distillation technology for the complex isopropanol–acetone–water system. 
All the designed processes are highly economically competitive and environmentally viable. In addition to recovering efficiently both isopropanol and acetone, the designed downstream processes offer the possibility to enhance the fermentation process by recycling all the present microorganisms and reducing fresh-water requirements.","downstream processing; isopropanol and acetone; pass-through distillation; process integration; syngas fermentation","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:4d7d4264-a5ba-4001-8181-091b4505a948","http://resolver.tudelft.nl/uuid:4d7d4264-a5ba-4001-8181-091b4505a948","Improving Whispered Speech Recognition Performance Using Pseudo-Whispered Based Data Augmentation","Lin, Zhaofeng (Student TU Delft); Patel, T.B. (TU Delft Multimedia Computing); Scharenborg, O.E. (TU Delft Multimedia Computing)","","2023","Whispering is a distinct form of speech known for its soft, breathy, and hushed characteristics, often used for private communication. The acoustic characteristics of whispered speech differ substantially from normally phonated speech and the scarcity of adequate training data leads to low automatic speech recognition (ASR) performance. To address the data scarcity issue, we use a signal processing-based technique that transforms the spectral characteristics of normal speech to those of pseudo-whispered speech. We augment an End-to-End ASR with pseudo-whispered speech and achieve an 18.2 % relative reduction in word error rate for whispered speech compared to the baseline. Results for the individual speaker groups in the wTIMIT database show the best results for US English. Further investigation showed that the lack of glottal information in whispered speech has the largest impact on whispered speech ASR performance.","Error analysis; Databases; Conferences; Training data; Transforms; Data augmentation; Acoustics; Whispered speech; pseudo-whisper; end-to-end speech recognition; wTIMIT; signal processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-07-19","","","Multimedia Computing","","",""
"uuid:893f5f46-6a00-470f-acc5-a9f5265efdf9","http://resolver.tudelft.nl/uuid:893f5f46-6a00-470f-acc5-a9f5265efdf9","Method in their madness: Explaining how designers think and act through the cognitive co-evolution model","Cash, Philip (University of Northumbria); Gonçalves, M. (TU Delft Methodologie en Organisatie van Design); Dorst, Kees (University of Technology Sydney)","","2023","Designers often face situations where the only way forward is through the exploration of possibilities. However, there is a critical disconnect between understanding of how designer's think and act in such situations. We address this disconnect by proposing and testing (via protocol analysis) the cognitive co-evolution model. Our model comprises a new approach to co-evolutionary design theory by explaining both the progression of the process itself and the creation of design outputs via an interplay between metacognitive perceived uncertainty, cognition, and the external world. We thus connect explanations of how designers think with descriptions of how they act. We provide a foundation for connecting to other theories, models, and questions in design research via common links to cognition and metacognition.","co-evolution; creativity; design cognition; design process(es); design thinking","en","journal article","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:766538af-1e06-4354-9594-dfea130c3d83","http://resolver.tudelft.nl/uuid:766538af-1e06-4354-9594-dfea130c3d83","Photoelectrocatalytic based simultaneous removal of multiple organic micro-pollutants by using a visible light driven BiVO4 photoanode","Ali, A.Z. (TU Delft Sanitary Engineering); Jagannathan, Sadhna (Nijhuis Water Technology B.V); Bennani, Yasmina-Doekhi (Nijhuis Water Technology B.V); van der Hoek, J.P. (TU Delft Sanitary Engineering; Waternet); Spanjers, H. (TU Delft Sanitary Engineering)","","2023","In this research, photoelectrocatalytic (PEC) based advanced oxidation process (AOP) was studied for the removal of multiple OMPs through an oxidative mechanism. This study investigated the application of a BiVO4 photoanode in simultaneous removal of three selected OMPs: acetaminophen (ACT), benzotriazole (BTA) and propranolol (PRO). This study was carried out in demineralized water with a starting concentration of each organic micro-pollutant (OMP) at 45 μg L−1. In order to fabricate BiVO4 photoanodes, a facile and effective dip-coating method was used to deposit BiVO4 photocatalytic layers on fluorine doped tin oxide (FTO) substrate. UV–vis diffusive reflectance spectroscopy, x-ray diffraction (XRD), x-ray photoelectron spectroscopy (XPS) and scanning electron microscopy (SEM) confirmed the successful fabrication of porous BiVO4 photoanode having an absorbance edge at around 526 nm. The fabricated photoanode showed incident photon to current conversion efficiency (IPCE) of 9.23% (λmax=445 nm) under 1 Sun standard illumination. Application of the fabricated photoanodes for the simultaneous removal of ACT, PRO and BTA at an applied voltage of 1 V (vs Ag/AgCl) under solar simulated light resulted in 99% removal of both ACT and PRO, and 70% removal of BTA. 
The first order rate coefficients and half-life times of ACT and PRO were about three times higher than those of BTA.","Organic micro-pollutants; Advanced oxidation process; Photoelectrocatalysis; Dip-coating; BiVO4 photoanode","en","journal article","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:01890849-55c1-4481-9967-0e41cb582dec","http://resolver.tudelft.nl/uuid:01890849-55c1-4481-9967-0e41cb582dec","A review of methods on buildability quantification of extrusion-based 3D concrete printing: From analytical modelling to numerical simulation","Chang, Z. (TU Delft Materials and Environment); Chen, Y. (TU Delft Materials and Environment); Schlangen, E. (TU Delft Materials and Environment); Šavija, B. (TU Delft Materials and Environment)","","2023","Herein, different kinds of methods for buildability quantification of 3D concrete printing are reviewed, including experimental approaches, analytical modelling, and numerical simulations. A brief introduction on printing process is first given. This discusses the material properties in different stages. Material printability, which encompasses pumpability, extrudability and buildability, is then discussed. Subsequently, a brief review of the experimental and analytical models for buildability quantification is presented and they're discussed. An overview on the numerical tools for 3DCP is then given. These numerical models can quantify structural buildability and optimize the printing parameters, therefore, providing a more economical solution for buildability quantification. In the end, a summary and discussion on the limitations of numerical tools for buildability quantification are provided, as well as recommendations for their improvement.","3D concrete printing; Buildability quantification; Numerical analysis; Printing process","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:ad7580f5-d8fa-4df4-a91b-803b5e3538fa","http://resolver.tudelft.nl/uuid:ad7580f5-d8fa-4df4-a91b-803b5e3538fa","Large-scale magnetic field maps using structured kernel interpolation for Gaussian process regression","Menzen, C.M. (TU Delft Team Manon Kok); Fetter, Marnix (Student TU Delft); Kok, M. (TU Delft Team Manon Kok)","","2023","We present a mapping algorithm to compute large-scale magnetic field maps in indoor environments with approximate Gaussian process (GP) regression. Mapping the spatial variations in the ambient magnetic field can be used for 10-calization algorithms in indoor areas. To compute such a map, GP regression is a suitable tool because it provides predictions of the magnetic field at new locations along with uncertainty quantification. Because full GP regression has a complexity that grows cubically with the number of data points, approximations for GPs have been extensively studied. In this paper, we build on the structured kernel interpolation (SKI) framework, speeding up inference by exploiting efficient Krylov subspace methods. More specifically, we incorporate SKI with derivatives (D-SKI) into the scalar potential model for magnetic field modeling and compute both predictive mean and covariance with a complexity that is linear in the data points. In our simulations, we show that our method achieves better accuracy than current state-of-the-art methods on magnetic field maps with a growing mapping area. In our large-scale experiments, we construct magnetic field maps from up to 40000 three-dimensional magnetic field measurements in less than two minutes on a standard laptop.","Gaussian process regression; indoor localization; magnetic field maps; structured kernel interpolation","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-02-25","","","Team Manon Kok","","",""
"uuid:65398b60-dc88-4d78-9f16-3f1408b1fa33","http://resolver.tudelft.nl/uuid:65398b60-dc88-4d78-9f16-3f1408b1fa33","Toward automatic generation of control structures for process flow diagrams with large language models","Hirtreiter, E.J. (TU Delft ChemE/Product and Process Engineering); Schulze Balhorn, L. (TU Delft ChemE/Product and Process Engineering); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering)","","2023","Developing Piping and Instrumentation Diagrams (P&IDs) is a crucial step during process development. We propose a data-driven method for the prediction of control structures. Our methodology is inspired by end-to-end transformer-based human language translation models. We cast the control structure prediction as a translation task where Process Flow Diagrams (PFDs) without control structures are translated to PFDs with control structures. We represent the topology of PFDs as strings using the SFILES 2.0 notation. We pretrain our model using generated PFDs to learn the grammatical structure. Thereafter, the model is fine-tuned leveraging transfer learning on real PFDs. The model achieved a top-5 accuracy of 74.8% on 10,000 generated PFDs and 89.2% on 100,000 generated PFDs. These promising results show great potential for AI-assisted process engineering. The tests on a dataset of 312 real PFDs indicate the need for a larger PFD dataset for industry applications and hybrid artificial intelligence solutions.","artificial intelligence; control structure; deep learning; machine Learning; piping and instrumentation diagram; process flow diagram; transformer language model","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:5c540f8e-3583-47c0-a6c3-ddeeb5db53c2","http://resolver.tudelft.nl/uuid:5c540f8e-3583-47c0-a6c3-ddeeb5db53c2","BoundED: Neural boundary and edge detection in 3D point clouds via local neighborhood statistics","Bode, Lukas (Universität Bonn); Weinmann, M. (TU Delft Computer Graphics and Visualisation); Klein, Reinhard (Universität Bonn)","","2023","Extracting high-level structural information from 3D point clouds is challenging but essential for tasks like urban planning or autonomous driving requiring an advanced understanding of the scene at hand. Existing approaches are still not able to produce high-quality results consistently while being fast enough to be deployed in scenarios requiring interactivity. We propose to utilize a novel set of features describing the local neighborhood on a per-point basis via first and second order statistics as input for a simple and compact classification network to distinguish between non-edge, sharp-edge, and boundary points in the given data. Leveraging this feature embedding enables our algorithm to outperform the state-of-the-art technique PCEDNet in terms of quality and processing time while additionally allowing for the detection of boundaries in the processed point clouds.","Boundary detection; Classification; Edge detection; Machine learning; Neural network; Point cloud processing","en","journal article","","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:bc6c7114-cd0d-421f-837f-8c8833ff7129","http://resolver.tudelft.nl/uuid:bc6c7114-cd0d-421f-837f-8c8833ff7129","Thermally self-sufficient process for cleaner production of e-methanol by CO2 hydrogenation","Vaquerizo, L. (TU Delft ChemE/Product and Process Engineering; University of Valladolid); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","","2023","The hydrogenation of CO2 to methanol is a technology that converts a greenhouse gas into a valuable chemical compound that efficiently stores energy. Several alternatives to perform this process have been proposed, but they are either not thermally self-sufficient and depend on using external fuel, or the power usage per ton of methanol is insufficiently optimized, or part of the raw materials must be purged and therefore there is a loss of methanol yield. This original study aims to develop a novel thermally self-sufficient process for e-methanol production (at practically 100% yield along with water by-product of 0.37 kgwater/kgproduct) that only uses green electricity. The main innovation of the process is an effective thermally self-sufficient heat-integration scheme that only needs 0.0059 m3water/kgmethanol combined with using a dividing wall column to recover the unreacted CO2 and obtain high purity methanol. In addition, the pressure reduction in the reaction-separation loop is limited to the pressure drop of the circuit to minimize the overall green electricity use to only 656 kWh per ton methanol, resulting in net CO2 emissions of −1.13 kgCO2/kgMeOH or 0.78 kgCO2/kgMeOH when the plant operates with green or grey hydrogen and electricity, respectively. Finally, the operating pressure in the reactor is optimized at 65 bar to minimize the total annualized cost.","Dividing-wall column; Energy efficiency; Optimal process design; Process integration","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:7928b88b-3afc-4d5e-8499-7152ca5893f8","http://resolver.tudelft.nl/uuid:7928b88b-3afc-4d5e-8499-7152ca5893f8","Distributed multi-agent magnetic field norm SLAM with Gaussian processes","Viset, F.M. (TU Delft Team Manon Kok); Helmons, R.L.J. (TU Delft Offshore and Dredging Engineering; Norwegian University of Science and Technology (NTNU)); Kok, M. (TU Delft Team Manon Kok)","","2023","Accurately estimating the positions of multi-agent systems in indoor environments is challenging due to the lack of Global Navigation Satellite System (GNSS) signals. Noisy measurements of position and orientation can cause the integrated position estimate to drift without bound. Previous research has proposed using magnetic field simultaneous localization and mapping (SLAM) to compensate for position drift in a single agent. Here, we propose two novel algorithms that allow multiple agents to apply magnetic field SLAM using their own and other agents' measurements. Our first algorithm is a centralized approach that uses all measurements collected by all agents in a single extended Kalman filter. This algorithm simultaneously estimates the agents' position and orientation and the magnetic field norm in a central unit that can communicate with all agents at all times. In cases where a central unit is not available and there are communication drop-outs between agents, our second algorithm, a distributed approach, can be employed. We tested both algorithms by estimating the position of magnetometers carried by three people in an optical motion capture lab with simulated odometry and simulated communication dropouts between agents. We show that both algorithms are able to compensate for drift in a case where single-agent SLAM is not. We also discuss the conditions for the estimate from our distributed algorithm to converge to the estimate from the centralized algorithm, both theoretically and experimentally.
Our experiments show that, for a communication drop-out rate of 80%, our proposed distributed algorithm, on average, provides a more accurate position estimate than single-agent SLAM. Finally, we demonstrate the drift-compensating abilities of our centralized algorithm on a real-life pedestrian localization problem with multiple agents moving inside a building.","Distributed Kalman filters; Gaussian processes; Multi-agent; SLAM","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-02-25","","","Team Manon Kok","","",""
"uuid:d428d731-b557-40cd-bd7f-391b9ac04e35","http://resolver.tudelft.nl/uuid:d428d731-b557-40cd-bd7f-391b9ac04e35","Mapping the magnetic field using a magnetometer array with noisy input Gaussian process regression","Edridge, T.I. (TU Delft Team Manon Kok); Kok, M. (TU Delft Team Manon Kok)","","2023","Ferromagnetic materials in indoor environments give rise to disturbances in the ambient magnetic field. Maps of these magnetic disturbances can be used for indoor localisation. A Gaussian process can be used to learn the spatially varying magnitude of the magnetic field using magnetometer measurements and information about the position of the magnetometer. The position of the magnetometer, however, is frequently only approximately known. This negatively affects the quality of the magnetic field map. In this paper, we investigate how an array of magnetometers can be used to improve the quality of the magnetic field map. The position of the array is approximately known, but the relative locations of the magnetometers on the array are known. We include this information in a novel method to make a map of the ambient magnetic field. We study the properties of our method in simulation and show that our method improves the map quality. We also demonstrate the efficacy of our method with experimental data for the mapping of the magnetic field using an array of 30 magnetometers.","Gaussian process; magnetic field; mapping; noisy inputs; sensor array","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-02-25","","","Team Manon Kok","","",""
"uuid:838efaec-c792-42af-aee9-0afedb8aa801","http://resolver.tudelft.nl/uuid:838efaec-c792-42af-aee9-0afedb8aa801","An enhanced lattice beam element model for the numerical simulation of rate-dependent self-healing in cementitious materials","Sayadi, Sina (Cardiff University); Chang, Z. (TU Delft Materials and Environment); He, S. (TU Delft Materials and Environment); Schlangen, E. (TU Delft Materials and Environment); Mihai, I.C. (Cardiff University); Jefferson, Anthony (Cardiff University)","","2023","This paper describes the development of a discrete lattice model for simulating structures formed from self-healing cementitious materials. In particular, a new approach is presented for simulating time dependent mechanical healing in lattice elements. The proposed formulation is designed to simulate the transient damage and healing behaviour of structures under a range of loading conditions. In addition, multiple and overlapping damage and healing events are considered. An illustrative example demonstrates the effects of varying the healing agent curing parameters on the computed mechanical response. The model is successfully validated using published experimental data from two series of tests on structural elements with an embedded autonomic self-healing system. The meso-scale model gives detailed information on the size and disposition of cracking and healing zones throughout an analysis time history. The model also provides an accurate means of determining the volume of healing agent required to achieve healing for all locations within a structural element. The importance of the information provided by the model for the design of self-healing cementitious material elements is highlighted.","Fracture process; Lattice simulations; Numerical analysis; Self-healing","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:7b584a4b-7630-469a-9e88-031c1d816844","http://resolver.tudelft.nl/uuid:7b584a4b-7630-469a-9e88-031c1d816844","A novel mechanistic modelling approach for microbial selection dynamics: Towards improved design and control of raceway reactors for purple bacteria","Alloul, A. (TU Delft BT/Environmental Biotechnology; Universiteit Antwerpen); Moradvandi, A. (TU Delft Sanitary Engineering; TU Delft Delft Center for Systems and Control); Puyol, Daniel (Universidad Rey Juan Carlos); Molina, Raúl (Universidad Rey Juan Carlos); Gardella, G. (TU Delft Sanitary Engineering); Vlaeminck, Siegfried E. (Universiteit Antwerpen); De Schutter, B.H.K. (TU Delft Delft Center for Systems and Control); Abraham, E. (TU Delft Water Resources); Lindeboom, R.E.F. (TU Delft Laboratory Water Management); Weissbrodt, D.G. (TU Delft BT/Environmental Biotechnology; Norwegian University of Science and Technology (NTNU))","","2023","Purple phototrophic bacteria (PPB) show an underexplored potential for resource recovery from wastewater. Raceway reactors offer a more affordable full-scale solution for wastewater treatment and enable useful additional aerobic processes. Current mathematical models of PPB systems provide useful mechanistic insights, but do not represent the full metabolic versatility of PPB and thus require further advancement to simulate the process for technology development and control. In this study, a new modelling approach for PPB was proposed that integrates the photoheterotrophic and both the anaerobic and aerobic chemoheterotrophic metabolic pathways through an empirical parallel metabolic growth constant. It aimed to model microbial selection dynamics in competition with aerobic and anaerobic microbial communities under different operational scenarios. A sensitivity analysis was carried out to identify the most influential parameters within the model and calibrate them based on experimental data.
Process perturbation scenarios were simulated, and the model showed good performance across them.","Bio-process modelling; Physico-chemical process; Process control; Process design; Resource recovery","en","journal article","","","","","","","","","","Delft Center for Systems and Control","BT/Environmental Biotechnology","","",""
"uuid:5d3a50a5-2f39-4963-88ff-1112c98fce18","http://resolver.tudelft.nl/uuid:5d3a50a5-2f39-4963-88ff-1112c98fce18","New type of basket stationary bed reactor for heterogeneous biocatalysis","Stradomska, Dominika (Silesian University of Technology); Świętochowska, Daria (Silesian University of Technology); Kubica, Robert (Silesian University of Technology); Hanefeld, U. (TU Delft BT/Biocatalysis); Szymańska, Katarzyna (Silesian University of Technology)","","2023","The effectiveness of catalytic processes using heterogeneous biocatalysts depends not only on the activity of the enzyme, but also on the efficiency of the reactor used. In this paper, we present a novel design of a basket reactor with a stationary catalyst bed (StatBioChem). The developed design was compared to a commercially available rotating bed reactor (SpinChem®). The biocatalysts used were invertase and acyltransferase from Mycobacterium smegmatis (MsAcT) immobilised on macroporous silica supports. The obtained values of the initial reaction rate, both in the reaction of saccharose hydrolysis and in the transesterification of 2,2-dimethyl-1,3-propanediol (NPG), were twice as high for the StatBioChem reactor. A similar relationship was also observed for the process efficiency expressed as space-time yield (STY).","A novel design of reactor; Heterogeneous biocatalysts; Intensification of mass and heat transfer; Process intensification; Rotating bed reactor; Stationary bed reactor","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public","","2024-04-26","","","BT/Biocatalysis","","",""
"uuid:5a6e2c4e-6d51-4eb0-b2f8-31b0daa892b4","http://resolver.tudelft.nl/uuid:5a6e2c4e-6d51-4eb0-b2f8-31b0daa892b4","Effects of thermal hydrolysis process-generated melanoidins on partial nitritation/anammox in full-scale installations treating waste activated sludge","Pavez Jara, J.A. (TU Delft Sanitary Engineering); van Lier, J.B. (TU Delft Sanitary Engineering); de Kreuk, M.K. (TU Delft Water Management)","","2023","The thermal hydrolysis process (THP) is a well-established anaerobic digestion (AD) pre-treatment technology. Despite its benefits, the pre-treatment increases the concentrations of nutrients and melanoidins in the digestate reject water after dewatering. The increased concentrations of nutrients and melanoidins formed during THP-AD can impact downstream processes, such as struvite precipitation and partial nitritation/anammox (PN/A). In the present work, six full-scale PN/A influents and effluents were sampled in The Netherlands (four with THP and two without THP). Full-scale samples were characterised, and the stoichiometric O2 consumption and melanoidins chelated to trace elements were analysed. The results showed that THP increased the concentration of total ammoniacal nitrogen (TAN), chemical oxygen demand (COD), total organic carbon (TOC), UVA 254 and colour, which are indicators of the occurrence of melanoidins. THP furthermore decreased the stoichiometric NO3−-N production from the PN/A reaction in effluents. The disparity between stoichiometric and measured NO3−-N in the THP-using plants was explained by the proliferation of denitrifiers. Moreover, denitrification improved the N removal efficiency due to the consumption of the stoichiometrically produced NO3−-N. Also, the stoichiometric O2 consumption increased in the plants using THP, reaching up to 56% of the O2 used for partial oxidation of TAN.
Trace element analysis revealed that the plants with elevated concentrations of melanoidins in the effluent showed a high percentage of chelated multivalent cations, particularly transition metals such as Fe. Kendall correlation coefficient analysis showed that the chelation of multivalent cations was correlated mainly with colour occurrence in the reject waters. Overall, the results indicated that in PN/A systems using THP-AD, the increased O2 consumption and trace element availability should be considered during process design.","Ammonium oxidising organisms; Anammox; Complexation; Humic substances; Melanoidins; Thermal hydrolysis process","en","journal article","","","","","","","","","","Water Management","Sanitary Engineering","","",""
"uuid:4e8310b5-bee3-46f0-b77d-b40a2adb93cb","http://resolver.tudelft.nl/uuid:4e8310b5-bee3-46f0-b77d-b40a2adb93cb","Improving the strength-ductility balance of medium-Mn Q&P steel by controlling cold-worked ferrite microstructure","Li, Jiayu (Northeastern University; Universiteit Gent); Xu, Yunbo (Northeastern University); Jing, Yi (Northeastern University); Gao, Yijing (Northeastern University); Liu, Hongliang (Technology Research Institute of Bengang Steel Plates Co.); Yu, Yongmei (Shenyang University of Chemical Technology); Banis, Alexandros (Universiteit Gent); Kestens, L.A.I. (TU Delft Team Maria Santofimia Navarro; Universiteit Gent); Petrov, R.H. (TU Delft Team Maria Santofimia Navarro; Universiteit Gent)","","2023","The phase transformations, microstructure and properties of two medium-Mn steels processed via quenching and partitioning were compared in this contribution, and a new strategy for controlling mechanical properties by introducing and controlling cold-worked ferrite prior to heat treatment is proposed. It was found that during heating, the recovery and recrystallization of cold-worked ferrite compete with austenitization, thereby inhibiting the coarsening of austenite. Compared to martensite, the cold-worked ferrite interface significantly delays the austenitization kinetics during the partitioning local equilibrium stage. These results lead to a diverse parent austenite, as well as a refined martensite substructure. As a result, the randomly distributed variants increase the number of effective grain boundaries, thus enhancing yield strength. The intercritical annealing process at a temperature of 860 °C resulted in the formation of fresh martensite-retained austenite (M/RA) constituents exhibiting a remarkably fine (<2 μm) and uniform grain morphology. Such a microstructure yielded a substantial improvement in both the strength and ductility of the steel.
The proposed treatment led to excellent elongation at fracture (24%), combined with a very high ultimate tensile strength of 1345 MPa and yield strength of 1163 MPa, resulting in a product of strength and elongation that exceeds 32 GPa%.","Austenitization kinetics; Cold-worked ferrite; Medium-manganese steel; Microstructure and mechanical properties; Quenching and partitioning process","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-09","","","Team Maria Santofimia Navarro","","",""
"uuid:3a1578ce-3739-4dba-bc90-2bcc4105c5c6","http://resolver.tudelft.nl/uuid:3a1578ce-3739-4dba-bc90-2bcc4105c5c6","Regularity theory for a new class of fractional parabolic stochastic evolution equations","Kirchner, K. (TU Delft Analysis); Willems, J. (TU Delft Analysis)","","2023","A new class of fractional-order parabolic stochastic evolution equations of the form (∂_t + A)^γ X(t) = Ẇ_Q(t), t ∈ [0, T], γ ∈ (0, ∞), is introduced, where −A generates a C0-semigroup on a separable Hilbert space H and the spatiotemporal driving noise Ẇ_Q is the formal time derivative of an H-valued cylindrical Q-Wiener process. Mild and weak solutions are defined; these concepts are shown to be equivalent and to lead to well-posed problems. Temporal and spatial regularity of the solution process X are investigated, the former being measured by mean-square or pathwise smoothness and the latter by using domains of fractional powers of A. In addition, the covariance of X and its long-time behavior are analyzed. These abstract results are applied to the cases when A := L^β and Q := L̃^(−α) are fractional powers of symmetric, strongly elliptic second-order differential operators defined on (i) bounded Euclidean domains or (ii) smooth, compact surfaces. In these cases, the Gaussian solution processes can be seen as generalizations of merely spatial (Whittle–)Matérn fields to space–time.","Matérn covariance; Mean-square differentiability; Mild solution; Nonlocal space–time differential operators; Spatiotemporal Gaussian processes; Strongly continuous semigroups","en","journal article","","","","","","","","","","","Analysis","","",""
"uuid:289d4f21-8065-4f10-9404-b948179f38fb","http://resolver.tudelft.nl/uuid:289d4f21-8065-4f10-9404-b948179f38fb","Got Whey? Sustainability Endpoints for the Dairy Industry through Resource Biorecovery","Giulianetti de Almeida, M.P. (TU Delft BT/Environmental Biotechnology; University of Campinas); Mockaitis, Gustavo (University of Campinas); Weissbrodt, D.G. (TU Delft BT/Environmental Biotechnology; Norwegian University of Science and Technology (NTNU))","","2023","Whey has applications in food, beverages, personal care products, pharmaceuticals, and the medical sector. However, it remains a massive dairy residue worldwide (160.7 million m³ per year), with high organic and nutrient loads. About 42% is used for low-value products such as animal feed and fertilizers or is even directly discharged into water streams, leading to ecosystem damage via eutrophication. We reviewed the uses and applications of cheese whey, along with associated environmental impacts and innovative ways to mitigate them using affordable and scalable technologies. Recycling and repurposing whey remain challenges for remote locations and poor communities with limited access to expensive technology. We propose a closed-loop biorefinery strategy to simultaneously mitigate environmental impacts and valorize whey resources. Anaerobic digestion utilizes whey to produce biogas and/or carboxylates. Alternative processes combining anaerobic digestion and low-cost open photobioprocesses can valorize whey and capture organic, nitrogenous, and phosphorous nutrients into microalgal biomass that can be used as food and crop supply or processed into biofuels, pigments, and antioxidants, among other value-added products.
The complete valorization of cheese whey also depends on facilitating access to relevant information on whey production, identifying stakeholders, reducing technology gaps among countries, enforcing legislation and compliance, and creating subsidies and fostering partnerships with industries and between countries.","anaerobic processes; cheese whey; circular economy; food waste; microalgae","en","review","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:13e2456b-b643-4a1f-86d3-86fd47db701d","http://resolver.tudelft.nl/uuid:13e2456b-b643-4a1f-86d3-86fd47db701d","Augmenting Ridership Data with Social Media Data to Analyse the Long-term Effect of COVID-19 on Public Transport","Xu, Y. (TU Delft Transport and Planning); Krishnakumari, P.K. (TU Delft Transport and Planning); Yorke-Smith, N. (TU Delft Algorithmics); Hoogendoorn, S.P. (TU Delft Transport and Planning)","","2023","COVID-19 significantly influenced travel behaviours and public attitudes towards public transport. Various studies have illustrated complicated factors related to long-term travel behaviour, indicating the difficulty of understanding and predicting post-pandemic long-term travel behaviour via traditional methods. In these complex circumstances, it is valuable to take advantage of social media data to obtain real-time public opinions and to understand dynamic travel behaviour changes from the passenger perspective. The present study provides a means - leveraging Twitter data and state-of-the-art Natural Language Processing (NLP) technologies - to interpret the underlying associations among public attitude, COVID-19 trends and public travel behaviour. Concretely, New York City has been selected due to its dependence on public transit for daily commuting. More than 500K tweets were collected from January 2019 to June 2022. Automated text mining, topic modelling, and sentiment analysis were implemented in these contexts to identify dynamic public reactions. A consistently negative attitude to public transit is detected, and five main topics, including derivative topics from COVID-19, are discovered within the COVID-19 period. Policy makers and transit managers can use these topics to take on board the public's concerns.
The paper thus exemplifies how social media data and NLP technologies can support policy-making progress and can benefit other tasks in the transportation domain.","COVID-19; natural language processing; public transport travel behaviour; sentiment analysis; social media; topic modelling","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-12-16","","Transport and Planning","Transport and Planning","","",""
"uuid:f03a92f6-37ee-4ac5-97cd-2501401fd19f","http://resolver.tudelft.nl/uuid:f03a92f6-37ee-4ac5-97cd-2501401fd19f","A novel lattice model to predict chloride diffusion coefficient of unsaturated cementitious materials based on multi-typed pore structure characteristics","Tong, Liang-yu (Shanghai Jiao Tong University); Xiong, Qing Xiang (Shanghai Jiao Tong University); Zhang, Zhidong (ETH Zürich); Chen, Xiangsheng (Shenzhen University); Ye, G. (TU Delft Materials and Environment); Liu, Qing feng (Shanghai Jiao Tong University)","","2023","This paper develops a novel lattice diffusive model to quantitatively study the chloride diffusion coefficient in unsaturated cementitious materials, in which the pore voxels are redistributed to better represent the real microstructure of hardened cement paste. Considering the hierarchical microstructure and different drying-wetting cycles, water distributions in multiscale pore structures are modelled, and the structure characteristics of the water-filled pores, including water connectivity, water tortuosity and effective porosity, are computationally extracted on this basis. A lattice diffusion network is established to predict the relative chloride diffusion coefficient by combining the effects of both the water saturation degree and the pore structure characteristics. The predicted results are validated against experimental data, and a concise analytical equation is proposed to predict the relative chloride diffusion coefficient. The equation indicates that the relative chloride diffusion coefficient is proportional to water connectivity but inversely proportional to the square of water tortuosity. In addition, the lattice model's quantitative results reveal that the water connectivity and water tortuosity are highly related to pre-water loading processes, and influenced by the gel pore fraction, which in turn will affect the relative chloride diffusion coefficient.
Compared with existing equations and non-redistributed models, the present model could improve the prediction accuracy significantly.","Cementitious materials; Chloride diffusion; Drying-wetting process; Lattice model; Pore structure","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-06-07","","","Materials and Environment","","",""
"uuid:69fff7d3-cc60-4e5c-bacf-c760db05f7ed","http://resolver.tudelft.nl/uuid:69fff7d3-cc60-4e5c-bacf-c760db05f7ed","Estimating process noise variance of PPP-RTK corrections: a means for sensing the ionospheric time-variability","Sadegh Nojehdeh, Parvaneh (University of Melbourne); Khodabandeh, Amir (University of Melbourne); Khoshelham, Kourosh (University of Melbourne); Amiri Simkooei, A. (TU Delft Optical and Laser Remote Sensing)","","2023","The provision of accurate ionospheric corrections in PPP-RTK enormously improves the performance of single-receiver user integer ambiguity resolution (IAR), thus enabling fast high precision positioning. While an external provider can disseminate such corrections to the user with a time delay, it is the task of the user to accurately time-predict the corrections so that they become applicable to the user positioning time. Accurate time prediction of the corrections requires a dynamic model in which the process noise of the corrections has to be correctly specified. In this contribution, we present an estimation method to determine the process noise variance of PPP-RTK corrections using single-receiver GNSS data. Our focus is on variance estimation of the first-order slant ionospheric delays, which allows one to analyze how the ionospheric process noise changes as a function of the solar activity, receiver local time, and receiver geographic latitude. By analyzing 11-year GNSS datasets, it is illustrated that estimates of the ionospheric process noise are strongly correlated with the solar flux index F10.7. 
These estimates also indicate a seasonal variation, with the highest level of variation observed during the spring and autumn equinoxes.","Correction latency; Ionospheric time-variability; Precise point positioning-real-time kinematic (PPP-RTK); Process noise variance-estimation","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-06-10","","","Optical and Laser Remote Sensing","","",""
"uuid:817d1d76-3a2d-4487-b90f-366edbaafe85","http://resolver.tudelft.nl/uuid:817d1d76-3a2d-4487-b90f-366edbaafe85","Seeing the Past, Planning the Future: Proudly Celebrating 25 Years of Assisting the Convergence of Process Sciences and Design Science","Horvath, I. (TU Delft Cyber-Physical Systems); Wan, Thomas T.H. (University of Central Florida); Huang, Jingwei (University of Texas Southwestern); Coatanea, Eric (Tampere University); Rayz, Julia M. (Purdue University); Zeng, Yong (Concordia University); Kim, Kyoung Yun (Wayne State University)","","2023","This Extended Editorial has been compiled by the members of the Editorial Board to celebrate the 25th anniversary of the establishment of the Journal of Integrated Design and Process Science, which operates as the Transactions of the Society for Design and Process Science. The paper is divided into three parts. The first part provides a detailed overview of the preliminaries, the objectives, and the periods of operation. It also includes a summary of the current application-orientated professional fields of interest, which are: (i) convergence mechanisms of creative scientific disciplines, (ii) convergence of artificial intelligence, team and health science, (iii) convergence concerning next-generation cyber-physical systems, and (iv) convergence in design and engineering education. The second part includes invited papers, which exemplify domains within the four fields of interest and also represent good examples of science communication. Short synopses of the contents of these representative papers are included.
The third part takes the major changes in scientific research and the academic publication arena into consideration, circumscribes the mission and vision as formulated by the current Editorial Board, and elaborates on the planned strategic exploration and utilization domains of interest.","Anniversary special issue; current profile; future perspectives and concerns; Journal of Integrated Design and Process Science; objectives and history; Society for Design and Process Science","en","review","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-13","","","Cyber-Physical Systems","","",""
"uuid:e9589a90-5d02-46b5-8d45-4fa0fb8998ed","http://resolver.tudelft.nl/uuid:e9589a90-5d02-46b5-8d45-4fa0fb8998ed","Robotic knitcrete: Computational design and fabrication of a pedestrian bridge using robotic shotcrete on a 3D-Knitted formwork","Rennen, Philipp (Technical University of Braunschweig); Gantner, Stefan (Technical University of Braunschweig); Dielemans, Gido (Technische Universität München); Bleker, Lazlo (Technische Universität München); Christidi, N. (TU Delft Applied Mechanics); Dörrie, Robin (Technical University of Braunschweig); Hojjat, Majid (BMW Group; Technische Universität München); Mai, Inka (Technical University of Braunschweig; Technical University of Berlin); Popescu, M.A. (TU Delft Applied Mechanics)","","2023","The research project presented here aims to develop a design-informed manufacturing process for complex concrete shell structures in additive manufacturing and thus overcome limitations of traditional construction methods, such as formwork and labor intensity. To achieve this, an effort was made to merge the two technologies of CNC-knitted stay-in-place formwork, known as KnitCrete, and robotically applied shotcrete, known as Shotcrete 3D Printing (SC3DP), and thereby reduce their respective limitations. The proposed workflow unites both digital fabrication methods into a seamless process that additionally integrates computational form finding, robotically applied fiber reinforcement, CNC post-processing and geometric quality verification to ensure precision and efficiency. As part of a cross-university, research-based teaching format, this concept was implemented in the construction of a full-scale pedestrian bridge, which served as a demonstrator to evaluate the capabilities and limitations of the process. Despite some challenges encountered during the process, the successful proof of concept shows a significant leap in the digital fabrication of complex concrete geometry, reducing reliance on labor-intensive methods.
The results shown in this paper make this fabrication approach a promising starting point for further developments in additive manufacturing in the construction sector.","additive manufacturing in construction; digital fabrication; flexible formwork; green-state post-processing; knitcrete; robotic fiber winding; shotcrete 3D printing; stay-in-place formwork","en","journal article","","","","","","","","","","","Applied Mechanics","","",""
"uuid:e1392fa0-a004-4903-9122-e3654c18ea1a","http://resolver.tudelft.nl/uuid:e1392fa0-a004-4903-9122-e3654c18ea1a","Using artificial neural networks to accelerate flowsheet optimization for downstream process development","Keulen, D. (TU Delft BT/Bioprocess Engineering); van der Hagen, Erik (Student TU Delft); Geldhof, Geoffroy (GSK Vaccines, Rixensart); Le Bussy, Olivier (GSK Vaccines, Rixensart); Pabst, Martin (TU Delft BT/Environmental Biotechnology); Ottens, M. (TU Delft BT/Design and Engineering Education)","","2023","An optimal purification process for biopharmaceutical products is important both to meet strict safety regulations and for economic benefits. To find the global optimum, it is desirable to screen the overall design space. Advanced model-based approaches enable screening of a broad range of the design space, in contrast to traditional statistical or heuristic-based approaches. However, chromatographic mechanistic modeling (MM), one of these advanced model-based approaches, can be rate-limiting for flowsheet optimization, which evaluates every purification possibility (e.g., type and order of purification techniques, and their operating conditions). Therefore, we propose to use artificial neural networks (ANNs) during global optimization to select the most promising flowsheets. This reduces the number of flowsheets for final local optimization and, consequently, the overall optimization time. Employing ANNs during global optimization proved to reduce the number of flowsheets from 15 to only 3. From these three, one flowsheet was optimized locally, and similar final results were found when using the global outcome of either the ANN or MM as starting condition. Moreover, the overall flowsheet optimization time was reduced by 50% when using ANNs during global optimization.
This approach accelerates early purification process design; moreover, it is generic, flexible, and independent of the sample material type.","artificial neural networks; chromatography; downstream process development; flowsheet optimization; mechanistic modeling; model-based process optimization","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:877b1e4b-8104-41a5-a267-77305606ffae","http://resolver.tudelft.nl/uuid:877b1e4b-8104-41a5-a267-77305606ffae","Holistic computational design within additive manufacturing through topology optimization combined with multiphysics multi-scale materials and process modelling","Bayat, Mohamad (Technical University of Denmark); Zinovieva, Olga (University of New South Wales Canberra); Ferrari, Federico (Technical University of Denmark); Ayas, C. (TU Delft Computational Design and Mechanics); Langelaar, Matthijs (TU Delft Computational Design and Mechanics); Spangenberg, Jon (Technical University of Denmark); Salajeghe, Roozbeh (Technical University of Denmark); Poulios, Konstantinos (Technical University of Denmark); Mohanty, Sankhya (Technical University of Denmark); Sigmund, Ole (Technical University of Denmark); Hattel, Jesper (Technical University of Denmark)","","2023","Additive manufacturing (AM) processes have proven to be a perfect match for topology optimization (TO), as they are able to realize sophisticated geometries in a unique layer-by-layer manner. From a manufacturing viewpoint, however, there is a significant likelihood of process-related defects within complex geometrical features designed by TO. This is because TO seldom accounts for process constraints and conditions and is typically perceived as a purely geometrical design tool. On the other hand, advanced AM process simulations have shown their potential as reliable tools capable of predicting various process-related conditions and defects. Thus far, geometry design by topology optimization and multiphysics manufacturing simulations have been viewed as two mostly separate paradigms, whereas one should really conceive them as one holistic computational design tool. More specifically, AM process models provide input to physics-based TO, so that the designed component will not only function optimally but also have near-minimal manufacturing defects.
In this regard, we aim to give a thorough overview of holistic computational design tool concepts applied within AM. First, literature on TO for performance optimization is reviewed, and then the most recent developments within physics-based TO techniques related to AM are covered. Process simulations play a pivotal role in the latter type of TO and serve as additional constraints on top of the primary end-user optimization objectives. As a natural consequence of this, a comprehensive and detailed review of non-metallic and metallic additive manufacturing simulations is performed, where the latter is divided into micro-scale and deposition-scale simulations. Material multi-scaling techniques, which are central to process-structure-property relationships, are reviewed next, followed by a subsection on process multi-scaling techniques, which are reduced-order versions of advanced process models and can be incorporated into physics-based TO due to their lower computational requirements. Finally, the paper is concluded and suggestions for further research paths are discussed.","Additive manufacturing; Multiphysics simulation; Process multi-scaling; Process-structure-property; Topology optimization","en","review","","","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:26c0c979-5676-4119-853c-82fcedf655b4","http://resolver.tudelft.nl/uuid:26c0c979-5676-4119-853c-82fcedf655b4","Why do major chemical accidents still happen in China: Analysis from a process safety management perspective","Bai, Mingqi (China University of Petroleum (East China)); Qi, Meng (China University of Petroleum (East China)); Shu, Chi Min (National Yunlin University of Science and Technology); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); Khan, Faisal (Texas A and M University); Chen, Chao (Southwest Petroleum University); Liu, Yi (China University of Petroleum (East China))","","2023","As an important consideration in the chemical industry, chemical process safety has received notable attention in China. However, catastrophic chemical accidents still occur. To better understand why accidents continue to occur, this paper presented a diagnostic analysis of 14 major chemical accidents in China from 2012 to 2022, based on VOSviewer software. The authors analysed the correlations among accident causes and their relationship with safety management elements. The study observed that inferior process safety culture, intentional violation (rule-breaking) of procedures, inadequate safety training, and illegal operations were the most frequent causes of accidents. These causes highlighted prominent gaps in process safety management (PSM) in China regarding process safety culture, compliance with standards, the conduct of operations, process safety competency, and training & performance assurance. The results of the co-occurrence analysis indicated strong correlations among these PSM gaps, suggesting that enterprises should pay attention to managing them collaboratively. These deficiencies in enterprises' PSM systems showed that the essential defects in China's chemical industry are a poor safety culture, inadequate accident investigation, inadequate training, and a lack of chemical safety personnel.
The study recommended that the chemical industry establish a superior process safety culture and competency for all personnel, monitor leading and lagging process safety indicators, apply inherent safety, and practice advanced safety management concepts. We hope that the findings can provide China's perspectives and experiences for global chemical safety.","Accident causation; Accident investigation; Chemical process safety; Process safety culture; Safety management elements","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-12-16","","","Safety and Security Science","","",""
"uuid:e66401c1-0e70-4e42-83af-d1f9802c6436","http://resolver.tudelft.nl/uuid:e66401c1-0e70-4e42-83af-d1f9802c6436","Real-time detection of mAb aggregates in an integrated downstream process","Neves Sao Pedro, M. (TU Delft BT/Bioprocess Engineering); Isaksson, Madelène (Lund University); Gomis-Fons, Joaquín (Lund University); Eppink, Michel H.M. (Wageningen University & Research; Byondis B.V., Nijmegen); Nilsson, Bernt (Lund University); Ottens, M. (TU Delft BT/Design and Engineering Education)","","2023","The implementation of continuous processing in the biopharmaceutical industry is hindered by the scarcity of process analytical technologies (PAT). To monitor and control a continuous process, PAT tools will be crucial to measure real-time product quality attributes such as protein aggregation. Miniaturizing these analytical techniques can increase measurement speed and enable faster decision-making. A fluorescent dye (FD)-based miniaturized sensor has previously been developed: a zigzag microchannel that mixes two streams in under 30 s. Bis-ANS and CCVJ, two established FDs, were employed in this micromixer to detect aggregation of the biopharmaceutical monoclonal antibody (mAb). Both FDs were able to robustly detect aggregation levels starting at 2.5%. However, the real-time measurement provided by the microfluidic sensor still needs to be implemented and assessed in an integrated continuous downstream process. In this work, the micromixer is implemented in a lab-scale integrated system for the purification of mAbs, established in an ÄKTA™ unit. A viral inactivation and two polishing steps were reproduced, sending a sample of the product pool after each phase directly to the microfluidic sensor for aggregate detection. An additional UV sensor was connected after the micromixer, and an increase in its signal would indicate that aggregates were present in the sample.
The at-line miniaturized PAT tool provides a fast aggregation measurement, in under 10 min, enabling better process understanding and control.","antibody aggregation; continuous biomanufacturing; fluorescent dyes; microfluidic sensor; process analytical technology (PAT)","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:5f4eab1e-d7c3-4f59-abbb-8d615ba5e478","http://resolver.tudelft.nl/uuid:5f4eab1e-d7c3-4f59-abbb-8d615ba5e478","Process model analysis of parenchyma sparing laparoscopic liver surgery to recognize surgical steps and predict impact of new technologies","Gholinejad, M. (TU Delft Medical Instruments & Bio-Inspired Technology); Edwin, Bjørn (Oslo University Hospital); Elle, Ole Jakob (Oslo University Hospital; Universitetet i Oslo); Dankelman, J. (TU Delft Medical Instruments & Bio-Inspired Technology); Loeve, A.J. (TU Delft Medical Instruments & Bio-Inspired Technology)","","2023","Background: Surgical process model (SPM) analysis is an effective means of predicting the surgical steps in a procedure as well as the potential impact of new technologies. Especially in complicated and high-volume treatments, such as parenchyma sparing laparoscopic liver resection (LLR), profound process knowledge is essential for improving surgical quality and efficiency. Methods: Videos of thirteen parenchyma sparing LLRs were analyzed to extract the duration and sequence of surgical steps according to the process model. The videos were categorized into three groups, based on the tumor locations. Next, a detailed discrete events simulation model (DESM) of LLR was built, based on the process model and the process data obtained from the endoscopic videos. Furthermore, the impact of using a navigation platform on the total duration of the LLR was studied with the simulation model by assessing three different scenarios: (i) no navigation platform, (ii) conservative positive effect, and (iii) optimistic positive effect. Results: The possible variations in the sequence of surgical steps when performing parenchyma sparing resections, depending on the tumor locations, were established. The statistically most probable chain of surgical steps was predicted, which could be used to improve parenchyma sparing surgeries.
In all three scenarios (i–iii), the treatment phase covered the major part (~40%) of the total procedure duration and was the bottleneck. The simulation results predict that a navigation platform could decrease the total surgery duration by up to 30%. Conclusion: This study showed that a DESM based on the analysis of steps during surgical procedures can be used to predict the impact of new technology. SPMs can be used to detect, e.g., the most probable workflow paths, which enables predicting next surgical steps, improving surgical training systems, and analyzing surgical performance. Moreover, it provides insight into the points for improvement and bottlenecks in the surgical process.","Discrete event simulation; Parenchyma sparing; Surgical data analysis; Surgical navigation platform; Surgical process modeling; Surgical step prediction; Surgical task recognition","en","journal article","","","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:aabf36b1-ad0b-4c6f-af69-90331fddc6a2","http://resolver.tudelft.nl/uuid:aabf36b1-ad0b-4c6f-af69-90331fddc6a2","Self-Supervised Learning for Enhancing Angular Resolution in Automotive MIMO Radars","Roldan Montero, I. (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2023","A novel framework to enhance the angular resolution of automotive radars is proposed. An approach to enlarge the antenna aperture using artificial neural networks is developed using a self-supervised learning scheme. Data from a high angular resolution radar, i.e., a radar with a large antenna aperture, is used to train a deep neural network to extrapolate the antenna element's response. Afterward, the trained network is used to enhance the angular resolution of compact, low-cost radars. One million scenarios are simulated in a Monte-Carlo fashion, varying the number of targets, their Radar Cross Section (RCS), and location to evaluate the method's performance. Finally, the method is tested in real automotive data collected outdoors with a commercial radar system. A significant increase in the ability to resolve targets is demonstrated, which can translate to more accurate and faster responses from the planning and decision-making system of the vehicle.","angular resolution; Antenna arrays; Automotive engineering; automotive radar; machine learning; MIMO; MIMO radar; neural networks; Radar; Radar antennas; Radar cross-sections; radar signal processing; Receiving antennas","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-03-19","","","Microwave Sensing, Signals & Systems","","",""
"uuid:e3cac856-57c7-4dc1-8228-91c4c49d6098","http://resolver.tudelft.nl/uuid:e3cac856-57c7-4dc1-8228-91c4c49d6098","Towards more credible models in catchment hydrology to enhance hydrological process understanding: Preface","Refsgaard, Jens Christian (Geological Survey of Denmark and Greenland); Mai, Juliane (University of Waterloo; Helmholtz Centre for Environmental Research–UFZ; Center for Scalable Data Analytics and Artificial Intelligence); Hrachowitz, M. (TU Delft Water Resources); Jain, Sharad K. (Indian Institute of Technology Roorkee); Stisen, Simon (Geological Survey of Denmark and Greenland)","","2023","Catchment modelling has undergone tremendous developments during the past decades. In the 1970s, the focus was on simulation of catchment runoff with process descriptions and data inputs being lumped to the catchment scale. Later developments included spatially distributed models allowing data inputs and hydrological processes to be simulated at model grid scale, that is, much finer than catchment scale. These models were able to explicitly simulate various processes such as soil moisture, evapotranspiration, groundwater and surface runoff. With the advancements in remote sensing technology and availability of high-resolution data, increased attention has in recent years been given to enhancing the capability of catchment models to reproduce spatial patterns and in this way improve our understanding of hydrological processes and the physical realism of catchment models. This development process has involved a wide spectrum of different aspects in the modelling process, reaching from an improved understanding of uncertainties in data, model parameters and model structures to new protocols for good modelling practices in water management. Recognizing the important role of biodiversity and social aspects, hydrologists are now extending the scope of their models to capture the interactions between water, biota and human social systems.
This special issue (SI) of hydrological processes is the result of an open call for abstracts announced in October 2020. The SI comprises a collection of 14 papers authored and co-authored by 77 scientists from 37 research institutions in 16 countries. Based on the key focus for each of the papers we have grouped them into five thematic topics: (i) review papers; (ii) papers developing and testing new process descriptions; (iii) papers focusing on how model calibration can improve process descriptions; (iv) papers exploring how the use of multiple model structures can improve model performance and process descriptions; and (v) papers focusing on modelling uncertainties. The grouping of the papers into the five topics should be considered as indicative only, because all papers address more than one of the five themes. The key findings in the papers of this Special Issue are summarized in the following five topic sections.","calibration; evaluation; hydrological modelling; model structure; process description; uncertainty","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-03-19","","","Water Resources","","",""
"uuid:9a69967b-366e-4f6a-bdbc-9f7fc9f71c67","http://resolver.tudelft.nl/uuid:9a69967b-366e-4f6a-bdbc-9f7fc9f71c67","Towards establishing an automated selection framework for underwater image enhancement methods","Ilioudi, A. (TU Delft Team Bart De Schutter); Wolf, B.J. (University Medical Center Groningen); Dabiri, A. (TU Delft Team Azita Dabiri); De Schutter, B.H.K. (TU Delft Delft Center for Systems and Control)","","2023","The majority of computer vision architectures are developed based on the assumption of the availability of good quality data. However, this is a particularly hard requirement to achieve in underwater conditions. To address this limitation, many underwater image enhancement methods have received considerable attention during the last decades. However, because there is no commonly accepted framework to systematically evaluate these methods and to determine the likely optimal one for a given image, it is not clear which one achieves the best results, and their adoption in practice is hindered. In this paper, we propose a standardized selection framework to evaluate the quality of an underwater image and to estimate the most suitable image enhancement technique based on its impact on image classification performance.","computer vision; image processing; underwater image enhancement","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-03-12","","Delft Center for Systems and Control","Team Bart De Schutter","","",""
"uuid:4601c636-ed53-4e45-b8b7-39eaf43cdefa","http://resolver.tudelft.nl/uuid:4601c636-ed53-4e45-b8b7-39eaf43cdefa","SFILES 2.0: an extended text-based flowsheet representation","Vogel, G.C. (TU Delft ChemE/Product and Process Engineering); Hirtreiter, E.J. (TU Delft ChemE/Product and Process Engineering); Schulze Balhorn, L. (TU Delft ChemE/Product and Process Engineering); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering)","","2023","SFILES are a text-based notation for chemical process flowsheets. They were originally proposed by d’Anterroches (Process flow sheet generation & design through a group contribution approach), who was inspired by the text-based SMILES notation for molecules. The text-based format has several advantages over flowsheet images regarding storage format, computational accessibility, and eventually data analysis and processing. However, the original SFILES version cannot describe essential flowsheet configurations unambiguously, such as the distinction between top and bottom products. Neither is it capable of describing the control structure required for the safe and reliable operation of chemical processes. Also, there is no publicly available software for decoding or encoding chemical process topologies to SFILES. We propose SFILES 2.0, with a complete description of the extended notation and naming conventions. Additionally, we provide open-source software for the automated conversion between flowsheet graphs and SFILES 2.0 strings. This way, we hope to encourage researchers and engineers to publish their flowsheet topologies as SFILES 2.0 strings.
The ultimate goal is to set the standards for creating a FAIR database of chemical process flowsheets, which would be of great value for future data analysis and processing.","Artificial intelligence; FAIR data; Flowsheet graph; Process flow diagram; STRING notation","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:e300954a-0bc8-4906-b715-f2b6901a7700","http://resolver.tudelft.nl/uuid:e300954a-0bc8-4906-b715-f2b6901a7700","Structure Design and Processing Strategies of MXene-Based Materials for Electromagnetic Interference Shielding","Oliveira, Filipa M. (University of Chemistry and Technology Prague); Azadmanjiri, Jalal (University of Chemistry and Technology Prague); Wang, Xuehang (TU Delft RST/Storage of Electrochemical Energy); Yu, Minghao (Technische Universität Dresden); Sofer, Zdeněk (University of Chemistry and Technology Prague)","","2023","The development of new materials for electromagnetic interference (EMI) shielding is an important area of research, as it allows for the creation of more effective and highly efficient shielding solutions. In this sense, MXenes, a class of 2D transition metal carbides and nitrides, have exhibited promising performance as EMI shielding materials. Electrical conductivity, low density, and flexibility are some of the properties offered by MXene materials, which make them very attractive in this field. Different processing techniques have been employed to produce MXene-based materials with EMI shielding properties. This review summarizes these processes and the role of key parameters, such as filler content and thickness, in achieving the desired EMI shielding performance. It also discusses the determination of power coefficients in defining the EMI shielding mechanism and the concept of green shielding materials, as well as their influence on the real application of a produced material. The review concludes with a summary of current challenges and prospects in the production of MXene materials as EMI shields.","2D materials; electromagnetic interference shielding; green shielding materials; MXenes; power coefficients; processing strategies","en","review","","","","","","","","","","","RST/Storage of Electrochemical Energy","","",""
"uuid:c499904f-d43f-4fce-b831-0f8234740ceb","http://resolver.tudelft.nl/uuid:c499904f-d43f-4fce-b831-0f8234740ceb","Accumulating ammoniacal nitrogen instead of melanoidins determines the anaerobic digestibility of thermally hydrolyzed waste activated sludge","Pavez Jara, J.A. (TU Delft Sanitary Engineering); van Lier, J.B. (TU Delft Sanitary Engineering); de Kreuk, M.K. (TU Delft Water Management)","","2023","Full-scale thermal hydrolysis processes (THP) showed an increase in nutrient release and the formation of melanoidins, which are considered to negatively impact methanogenesis during mesophilic anaerobic digestion (AD). In this research, fractionation of THP-sludge was performed to elucidate the distribution of nutrients and the formed melanoidins over the liquid and solid sludge matrix. Degradation of the different fractions in subsequent AD was assessed, and the results were compared with non-pre-treated waste activated sludge (WAS). Results showed that the THP-formed soluble melanoidins were partially biodegradable under AD, especially the fraction with a molecular weight under 1.1 kDa, which was related to protein-like substances. The use of THP on WAS increased the non-biodegradable soluble chemical oxygen demand (sCOD) after AD, from 1.1% to 4.9% of the total COD. The total ammoniacal nitrogen (TAN) concentration only slightly increased during THP without AD. However, after AD, TAN release was 34% higher in the THP-treated WAS compared to non-treated WAS, i.e., 36.7 ± 0.7 compared to 27.4 ± 0.4 mgTANreleased/gCODsubstrate, respectively. Results from modified specific methanogenic activity (mSMA) tests showed that the organics solubilised during THP were not inhibitory to acetotrophic methanogens. However, after AD of THP-treated sludge and WAS, the mSMA showed that all analysed samples exhibited strong inhibition of methanogenesis due to the presence of TAN and the associated free ammonia nitrogen (FAN).
In specific methanogenic activity (SMA) tests with incremental concentrations of TAN/FAN and melanoidins, TAN/FAN induced strong inhibition of the methanogens, halving the SMA at around 2.5 gTAN/L and 100 mgFAN/L. Conversely, melanoidins did not inhibit the methanogens. Our present results revealed that when applying THP-AD at full scale, the increase in TAN/FAN had a remarkably greater impact on AD than the formation of melanoidins.","Ammonium release; Anaerobic digestion; Inhibition; Melanoidins; Pre-treatment; Thermal hydrolysis process","en","journal article","","","","","","","","","","Water Management","Sanitary Engineering","","",""
"uuid:6c9e85e5-0580-4f16-9394-662c5a398b4f","http://resolver.tudelft.nl/uuid:6c9e85e5-0580-4f16-9394-662c5a398b4f","Application of a fluorescent dye-based microfluidic sensor for real-time detection of mAb aggregates","Neves Sao Pedro, M. (TU Delft BT/Bioprocess Engineering); Eppink, Michel H.M. (Wageningen University & Research; Byondis B.V., Nijmegen); Ottens, M. (TU Delft BT/Design and Engineering Education)","","2023","The lack of process analytical technologies (PAT) able to provide real-time information and process control over a biopharmaceutical process has long impaired the transition to continuous biomanufacturing. For monoclonal antibody (mAb) production, aggregate formation is a major critical quality attribute (CQA), with several known process parameters (e.g., protein concentration and agitation) influencing this phenomenon. The development of a real-time tool to monitor aggregate formation is therefore crucial to gain control and achieve continuous processing. Due to their inherently short operation times, miniaturized biosensors placed after each step can be a powerful solution. In this work, the development of a fluorescent dye (FD)-based microfluidic sensor for fast at-line PAT is described, using fluorescent dyes to examine possible mAb size differences. A zigzag microchannel, which provides 90% mixing efficiency in under 30 s, coupled to a UV–Vis detector and using four FDs, was studied and validated. With the different generated mAb aggregation samples, the FDs Bis-ANS and CCVJ were able to robustly detect aggregation levels from at least 2.5% up to 10%.
The proposed FD-based micromixer is then ultimately implemented and validated in a lab-scale purification system, demonstrating the potential of a miniaturized biosensor to speed up CQA measurements in a continuous process.","continuous biomanufacturing; fluorescent dyes; microfluidic sensor; process analytical technology (PAT); protein aggregation","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:814b2ec0-11a1-4080-a782-c43d100ab130","http://resolver.tudelft.nl/uuid:814b2ec0-11a1-4080-a782-c43d100ab130","Interval Markov Decision Processes with Continuous Action-Spaces","Delimpaltadakis, Giannis (Eindhoven University of Technology); Lahijanian, Morteza (University of Colorado); Mazo, M. (TU Delft Team Manuel Mazo Jr); Laurenti, L. (TU Delft Team Luca Laurenti)","","2023","Interval Markov Decision Processes (IMDPs) are finite-state uncertain Markov models, where the transition probabilities belong to intervals. Recently, there has been a surge of research on employing IMDPs as abstractions of stochastic systems for control synthesis. However, due to the absence of algorithms for synthesis over IMDPs with continuous action-spaces, the action-space is assumed discrete a priori, which is a restrictive assumption for many applications. Motivated by this, we introduce continuous-action IMDPs (caIMDPs), where the bounds on transition probabilities are functions of the action variables, and study value iteration for maximizing expected cumulative rewards. Specifically, we decompose the max-min problem associated with value iteration into |Q| max problems, where |Q| is the number of states of the caIMDP. Then, exploiting the simple form of these max problems, we identify cases where value iteration over caIMDPs can be solved efficiently (e.g., with linear or convex programming). We also gain other interesting insights: e.g., in certain cases where the action set A is a polytope, synthesis over a discrete-action IMDP, where the actions are the vertices of A, is sufficient for optimality. We demonstrate our results on a numerical example.
Finally, we include a short discussion on employing caIMDPs as abstractions for control synthesis.","bounded-parameter Markov decision processes; control synthesis; planning under uncertainty; uncertain Markov decision processes; value iteration","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Team Manuel Mazo Jr","","",""
"uuid:8e8b61b4-0289-4d29-a65e-2a350c77306a","http://resolver.tudelft.nl/uuid:8e8b61b4-0289-4d29-a65e-2a350c77306a","Model predictive impedance control with Gaussian processes for human and environment interaction","Haninger, Kevin (Fraunhofer); Hegeler, Christian (Fraunhofer); Peternel, L. (TU Delft Human-Robot Interaction)","","2023","Robotic tasks which involve uncertainty – due to variation in goal, environment configuration, or confidence in task model – may require human input to instruct or adapt the robot. In tasks with physical contact, several existing methods for adapting robot trajectory or impedance according to individual uncertainties have been proposed, e.g., realizing intention detection or uncertainty-aware learning from demonstration. However, isolated methods cannot address the wide range of uncertainties jointly present in many tasks. To improve generality, this paper proposes a model predictive control (MPC) framework which plans both trajectory and impedance online, can consider discrete and continuous uncertainties, includes safety constraints, and can be efficiently applied to a new task. This framework can consider uncertainty from: contact constraint variation, uncertainty in human goals, or task disturbances. An uncertainty-aware task model is learned from a few (≤3) demonstrations using Gaussian Processes. This task model is used in a nonlinear MPC problem to optimize robot trajectory and impedance according to belief in discrete human goals, human kinematics, safety constraints, contact stability, and frequency-domain disturbance rejection. 
This MPC formulation is introduced, analyzed with respect to convexity, and validated in co-manipulation with multiple goals, a collaborative polishing task, and a collaborative assembly task.","Gaussian processes; Human–robot interaction; Impedance control; Intention detection; Model predictive control","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-11-08","","","Human-Robot Interaction","","",""
"uuid:e50d3d9a-b0ef-44b1-8ca9-15b103f77083","http://resolver.tudelft.nl/uuid:e50d3d9a-b0ef-44b1-8ca9-15b103f77083","Distributionally Robust Strategy Synthesis for Switched Stochastic Systems","Gracia, Ibon (University of Colorado); Boskos, D. (TU Delft Team Dimitris Boskos); Laurenti, L. (TU Delft Team Luca Laurenti); Mazo, M. (TU Delft Team Manuel Mazo Jr)","","2023","We present a novel framework for formal control of uncertain discrete-time switched stochastic systems against probabilistic reach-avoid specifications. In particular, we consider stochastic systems with additive noise, whose distribution lies in an ambiguity set of distributions that are ε-close to a nominal one according to the Wasserstein distance. For this class of systems we derive control synthesis algorithms that are robust against all these distributions and maximize the probability of satisfying a reach-avoid specification, defined as the probability of reaching a goal region while being safe. The framework we present first learns an abstraction of a switched stochastic system as a robust Markov decision process (robust MDP) by accounting for both the stochasticity of the system and the uncertainty in the noise distribution. Then, it synthesizes a strategy on the resulting robust MDP that maximizes the probability of satisfying the property and is robust to all uncertainty in the system. This strategy is then refined into a switching strategy for the original stochastic system. By exploiting tools from optimal transport and stochastic programming, we show that synthesizing such a strategy reduces to solving a set of linear programs, thus guaranteeing efficiency. We experimentally validate the efficacy of our framework on various case studies, including both linear and non-linear switched stochastic systems.
Our results represent the first formal approach for control synthesis of stochastic systems with uncertain noise distribution.","Formal synthesis; Safe autonomy; Switched stochastic systems; Uncertain Markov decision processes; Wasserstein distance","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Team Luca Laurenti","","",""
"uuid:fa4df9f2-1aca-40e6-a0ac-83fab2855a8d","http://resolver.tudelft.nl/uuid:fa4df9f2-1aca-40e6-a0ac-83fab2855a8d","Distributed large-scale graph processing on FPGAs","Sahebi, Amin (University of Siena; University of Florence); Barbone, Marco (Imperial College London); Procaccini, Marco (University of Siena; Consorzio Interuniversitario Nazionale per l’Informatica); Luk, Wayne (Imperial College London); Gaydadjiev, G. (TU Delft Quantum & Computer Engineering; TU Delft Quantum Circuit Architectures and Technology; Imperial College London); Giorgi, Roberto (University of Siena; Consorzio Interuniversitario Nazionale per l’Informatica)","","2023","Processing large-scale graphs is challenging due to the nature of the computation that causes irregular memory access patterns. Managing such irregular accesses may cause significant performance degradation on both CPUs and GPUs. Thus, recent research trends propose graph processing acceleration with Field-Programmable Gate Arrays (FPGA). FPGAs are programmable hardware devices that can be fully customised to perform specific tasks in a highly parallel and efficient manner. However, FPGAs have a limited amount of on-chip memory that cannot fit the entire graph. Due to the limited device memory size, data needs to be repeatedly transferred to and from the FPGA on-chip memory, which makes data transfer time dominate over the computation time. A possible way to overcome the FPGA accelerators’ resource limitation is to engage a multi-FPGA distributed architecture and use an efficient partitioning scheme. Such a scheme aims to increase data locality and minimise communication between different partitions. This work proposes an FPGA processing engine that overlaps, hides and customises all data transfers so that the FPGA accelerator is fully utilised. This engine is integrated into a framework for using FPGA clusters and is able to use an offline partitioning method to facilitate the distribution of large-scale graphs. 
The proposed framework uses Hadoop at a higher level to map a graph to the underlying hardware platform. The higher layer of computation is responsible for gathering the blocks of data that have been pre-processed and stored on the host’s file system and distributing them to a lower layer of computation made of FPGAs. We show how graph partitioning combined with an FPGA architecture leads to high performance, even when the graph has millions of vertices and billions of edges. In the case of the PageRank algorithm, widely used for ranking the importance of nodes in a graph, compared to state-of-the-art CPU and GPU solutions, our implementation is the fastest, achieving a speedup of 13x compared to 8x and 3x, respectively. Moreover, in the case of large-scale graphs, the GPU solution fails due to memory limitations while the CPU solution achieves a speedup of 12x compared to the 26x achieved by our FPGA solution. Other state-of-the-art FPGA solutions are 28 times slower than our proposed solution. When the size of a graph limits the performance of a single FPGA device, our performance model shows that using multi-FPGAs in a distributed system can further improve the performance by about 12x. This highlights our implementation efficiency for large datasets not fitting in the on-chip memory of a hardware device.","Accelerators; Distributed computing; FPGA; Graph processing; Grid partitioning","en","journal article","","","","","","","","","","Quantum & Computer Engineering","Quantum Circuit Architectures and Technology","","",""
"uuid:1f945eaf-7b02-4375-a715-a9bb05e2da41","http://resolver.tudelft.nl/uuid:1f945eaf-7b02-4375-a715-a9bb05e2da41","A Copula-Based Bayesian Network to Model Wave Climate Multivariate Uncertainty in the Alboran Sea","Mares Nasarre, P. (TU Delft Hydraulic Structures and Flood Risk); García-Maribona, Julio (DHI Group); Mendoza Lugo, M.A. (TU Delft Hydraulic Structures and Flood Risk); Morales Napoles, O. (TU Delft Hydraulic Structures and Flood Risk)","P. Brito, Mário (editor); Aven, Terje (editor); Baraldi, Piero (editor); Čepin, Marko (editor); Zio, Enrico (editor)","2023","An accurate estimation of wind and wave variables is key for coastal and offshore applications. Recently, copulas have gained popularity for modelling the multivariate dependence of wind and waves, since accounting for the hydrodynamic relationships between them is needed to ensure reliable estimations of the required design values. In this study, copula-based Bayesian networks (BNs) are explored as a tool to model extreme values of significant wave height (Hs), wave period, wave direction, wind speed and wind direction. The model is applied to a case study located in the Alboran Sea, close to the Spanish coast, using the ERA5 database. Extreme values of Hs are sampled using Yearly Maxima, and concomitant values of the remaining variables are used. The K-means clustering algorithm is applied to separate the different wave components, and a BN is built for each of them. The assumption of modelling the dependence between the variables using Gaussian copulas and the structure of the BNs are supported with the d-calibration score. Fitted marginal distributions are introduced in the nodes of the BNs and their performance is assessed using in-sample data and the coefficient of determination. The BN models proposed present high performance with a low computational cost, proving to be powerful tools for modelling the variables under investigation.
Future research will include different locations and databases.","waves; wind; stochastic process; k-means; Bayesian networks; copulas","en","conference paper","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-03-08","","","Hydraulic Structures and Flood Risk","","",""
"uuid:5b4d00c8-92f3-411f-91c0-909b6dc1fc92","http://resolver.tudelft.nl/uuid:5b4d00c8-92f3-411f-91c0-909b6dc1fc92","Joint Maximum Likelihood Estimation of Microphone Array Parameters for a Reverberant Single Source Scenario","Li, C. (TU Delft Signal Processing Systems); Martinez, Jorge (TU Delft Multimedia Computing); Hendriks, R.C. (TU Delft Signal Processing Systems)","","2023","Estimation of acoustic-scene-related parameters such as relative transfer functions (RTFs) from source to microphones, source power spectral densities (PSDs) and PSDs of the late reverberation is essential and also challenging. Existing maximum likelihood estimators typically consider only subsets of these parameters and use each time frame separately. In this paper we explicitly focus on the single source scenario and first propose a joint maximum likelihood estimator (MLE) to estimate all parameters jointly using a single time frame. Since the RTFs are typically invariant for a number of consecutive time frames, we also propose a joint MLE using multiple time frames which has similar estimation performance compared to a recently proposed reference algorithm called simultaneous confirmatory factor analysis (SCFA), but at a much lower complexity. Moreover, we present experimental results which demonstrate that the estimation accuracy, together with the performance in noise reduction, speech quality and speech intelligibility, of our proposed joint MLE outperforms those of existing MLE-based approaches that use only a single time frame.","Dereverberation; maximum likelihood estimation; microphone array signal processing; PSD estimation; RTF estimation","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-09","","","Signal Processing Systems","","",""
"uuid:f7b33c23-4308-4535-95b1-22b668988152","http://resolver.tudelft.nl/uuid:f7b33c23-4308-4535-95b1-22b668988152","Effects of 3D Concrete Printing Phases on the Mechanical Performance of Printable Strain-Hardening Cementitious Composites","van Overmeir, A.L. (TU Delft Materials and Environment); Šavija, B. (TU Delft Materials and Environment); Bos, Freek P. (Eindhoven University of Technology; Technische Universität München); Schlangen, E. (TU Delft Materials and Environment)","","2023","Several studies have shown the potential of strain-hardening cementitious composites (SHCC) as a self-reinforcing printable mortar. However, papers published on the development of three-dimensional printable SHCC (3DP-SHCC) often report a discrepancy between the mechanical properties of the cast and printed specimens. This paper evaluates the effect of each successive phase of the printing process on the mechanical properties of the composite. To this end, materials were collected at three different stages in the printing process, i.e., after each of mixing, pumping, and extruding. The collected 3DP-SHCC materials were then cast in specimen moulds and their mechanical properties after curing were obtained. The resulting findings were juxtaposed with the mechanical properties of the specimens derived from a fully printed 3DP-SHCC element, and our findings indicate that while the density and the compressive strength are not significantly influenced by the printing process, the flexural and tensile strength, along with their associated deflection and strain, are strongly affected. Additionally, this research identifies the pumping phase as the primary phase influencing the mechanical properties during the printing process.","3DP-SHCC; mechanical properties; 3D concrete printing; printing process; 3DP-ECC","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:d118d0c1-de56-4fea-9728-8e46611ef210","http://resolver.tudelft.nl/uuid:d118d0c1-de56-4fea-9728-8e46611ef210","A survey on the evolution of stream processing systems","Fragkoulis, M. (TU Delft Web Information Systems; Delivery Hero); Carbone, Paris (KTH Royal Institute of Technology; Research Institutes of Sweden RISE); Kalavri, Vasiliki (Boston University); Katsifodimos, A (TU Delft Web Information Systems)","","2023","Stream processing has been an active research field for more than 20 years, but it is now witnessing its prime time due to recent successful efforts by the research community and numerous worldwide open-source communities. This survey provides a comprehensive overview of fundamental aspects of stream processing systems and their evolution in the functional areas of out-of-order data management, state management, fault tolerance, high availability, load management, elasticity, and reconfiguration. We review noteworthy past research findings, outline the similarities and differences between the first (’00–’10) and second (’11–’23) generation of stream processing systems, and discuss future trends and open problems.","Cloud applications; Fault-tolerance; Stream processing; Streaming analytics","en","journal article","","","","","","","","","","","Web Information Systems","","",""
"uuid:543e4675-5d66-4114-b019-48d9c3d95544","http://resolver.tudelft.nl/uuid:543e4675-5d66-4114-b019-48d9c3d95544","Infinite dimensional Piecewise Deterministic Markov Processes","Dobson, P. (TU Delft Statistics; Heriot-Watt University); Bierkens, G.N.J.C. (TU Delft Statistics)","","2023","In this paper we aim to construct infinite dimensional versions of well established Piecewise Deterministic Monte Carlo methods, such as the Bouncy Particle Sampler, the Zig-Zag Sampler and the Boomerang Sampler. In order to do so we provide an abstract infinite dimensional framework for Piecewise Deterministic Markov Processes (PDMPs) with unbounded event intensities. We further develop exponential convergence to equilibrium of the infinite dimensional Boomerang Sampler, using hypocoercivity techniques. Furthermore we establish how the infinite dimensional Boomerang Sampler admits a finite dimensional approximation, rendering it suitable for computer simulation.","Hypocoercivity; Infinite Dimensional Stochastic Process; Piecewise Deterministic Markov Processes; Uniform in time approximation","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:76b83523-5b61-4f2a-ba13-b2625a1f1058","http://resolver.tudelft.nl/uuid:76b83523-5b61-4f2a-ba13-b2625a1f1058","Modeling Multi-Fraction Coastal Aeolian Sediment Transport With Horizontal and Vertical Grain-Size Variability","van IJzendoorn, Christa (TU Delft Coastal Engineering); Hallin, E.C. (TU Delft Coastal Engineering; Lund University); Reniers, A.J.H.M. (TU Delft Environmental Fluid Mechanics); de Vries, S. (TU Delft Coastal Engineering)","","2023","Grain size affects the rates of aeolian sediment transport on beaches. Sediment in coastal environments typically consists of multiple grain-size fractions and exhibits spatiotemporal variations. Still, conceptual and numerical aeolian transport models are simplified and often only include a single fraction that is constant over the model domain. It is unclear to what extent this simplification is valid and if the inclusion of multi-fraction transport and spatial grain-size variations affects aeolian sediment transport simulations and predictions of coastal dune development. This study applies the numerical aeolian sediment transport model AeoLiS to compare single-fraction to multi-fraction approaches for a range of grain-size distributions and spatial grain-size scenarios. The results show that on timescales of days to years, single-fraction simulations with the median grain size, D50, often give similar results to multi-fraction simulations, provided the wind is able to mobilize all fractions within that time frame. On these timescales, vertical variability in grain size has a limited effect on total transport rates, but it does influence the simulation results on minute timescales. Horizontal grain-size variability influences both the total transport rates and the downwind bed grain-size composition. The results provide new insights into the influence of beach sediment composition and spatial variability on total transport rates toward the dunes. 
The findings of this study can guide the implementation of grain-size variability in numerical aeolian sediment transport models.","aeolian processes; AeoLiS; beaches; coastal processes; grain size; modeling; nearshore processes; sediment transport","en","journal article","","","","","","","","","","","Coastal Engineering","","",""
"uuid:c777dcac-fe53-463a-beed-25729ba27f8b","http://resolver.tudelft.nl/uuid:c777dcac-fe53-463a-beed-25729ba27f8b","Towards Evaluating Stream Processing Autoscalers","Siachamis, G. (TU Delft Web Information Systems); Kanis, Job (Student TU Delft); Koper, Wybe (Student TU Delft); Psarakis, K. (TU Delft Web Information Systems); Fragkoulis, M. (TU Delft Web Information Systems; Delivery Hero SE); van Deursen, A. (TU Delft Software Technology); Katsifodimos, A (TU Delft Web Information Systems)","","2023","In this work, we evaluate autoscaling solutions for stream processing engines. Although autoscaling has become a mainstream subject of research in the last decade, the database research community has yet to evaluate different autoscaling techniques under a proper benchmarking setting and evaluation framework. As a result, every newly proposed autoscaling solution only performs a shallow performance evaluation and comparison against existing solutions. In this paper, we evaluate autoscaling solutions by employing two streaming queries and a dynamic workload that follows a cosine pattern. Our experiments reveal that current autoscaling techniques fail to account for generated lag due to rescaling or underprovisioning and cannot efficiently handle practical scenarios of intensely dynamic workloads.","autoscaling; stream processing","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-12-14","","Software Technology","Web Information Systems","","",""
"uuid:0b70a95c-ec5f-4e8e-8b95-6edcb983e715","http://resolver.tudelft.nl/uuid:0b70a95c-ec5f-4e8e-8b95-6edcb983e715","Exploring Homogeneity and Covariance Matrix Structure of Multistatic/Polarimetric Sea-Clutter Data","Carotenuto, V. (Università degli Studi di Napoli Federico II); Aubry, A. (Università degli Studi di Napoli Federico II); De Maio, A. (Università degli Studi di Napoli Federico II); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2023","The design of bespoke adaptive detection schemes relying on the joint use of multistatic/polarimetric measurements requires a preliminary statistical inference on the clutter interference environment. This is fundamental to develop an analytic model for the received signal samples, which is used to synthesize the radar detector. In this respect, the aim of this paper is the design of suitable learning tools to study some important statistical properties of the sea-clutter environment perceived at the nodes of a multistatic/polarimetric radar system. The study is complemented by the use of radar returns measured with the Netted RADar (NetRAD), which simultaneously collects monostatic and bistatic measurements. Precisely, the homogeneity properties of the data in the slow-time domain are first assessed by resorting to Generalized Inner Product (GIP) based statistics. Then, the possible presence of structures in the clutter covariance matrices (both inter- and intra-channel) is investigated through ad-hoc statistical tools. The results show that the data, regardless of the polarimetric/geometric configuration, can be modeled as drawn from a stationary process within the coherence time.
Moreover, for both the monostatic and the bistatic returns the structure of the covariance matrix depends upon the polarimetric/geometric configuration of the sensing system.","covariance matrix structure; data homogeneity; Generalized Inner Product (GIP); Multistatic/polarimetric radar; sea-clutter; Spherically Invariant Random Process (SIRP)","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-27","","","Microwave Sensing, Signals & Systems","","",""
"uuid:fa6cbcad-93db-405b-a638-63ba3994f25a","http://resolver.tudelft.nl/uuid:fa6cbcad-93db-405b-a638-63ba3994f25a","Polysulfone nanofiber-modified composite laminates: Investigation of mode-I fatigue behavior and damage mechanisms","Mohammadi, R. (TU Delft Materials and Environment); Akrami, R. (University of Strathclyde); Assaad, Maher (Ajman University); Nasor, Mohamed (Ajman University); Imran, Ahmed (Ajman University); Fotouhi, M. (TU Delft Materials and Environment)","","2023","In this study, the fatigue properties of carbon fiber-reinforced polymer (CFRP) composite laminates were investigated, specifically focusing on the incorporation of 100-µm polysulfone (PSU) nanofibers as an interleaving material. The PSU nanofibers were produced using the electrospinning technique. Quasi-static and fatigue tests were conducted on both the reference and modified specimens to evaluate their mode-I performance. The results revealed an 85% increase in fracture toughness (GIC) under quasi-static testing. The fatigue plots revealed a noteworthy reduction in the fatigue crack growth rate (da/dN) for the modified specimens due to new toughening mechanisms. Scanning electron microscopy (SEM) demonstrated that the PSU nanofibers melted and distributed within the interface, leading to phase separation and a sea-island structure. The presence of PSU microspheres caused crack deflection during delamination, which resulted in increased fracture and fatigue resistance.","Carbon/epoxy; Electro spinning process; Fatigue crack growth rate; Fracture toughness; Polysulfone nanofiber","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:61ff8660-ea33-4f1c-9106-ee7a9f51991b","http://resolver.tudelft.nl/uuid:61ff8660-ea33-4f1c-9106-ee7a9f51991b","The effect of electric double layers, zeta potential and pH on apparent viscosity of non-Brownian suspensions","Srinivasan, Sudharsan (University of Limerick); van den Akker, H.E.A. (TU Delft ChemE/Transport Phenomena; University of Limerick); Shardt, Orest (University of Limerick)","","2023","We carried out 3D simulations of monodisperse particle suspensions subjected to a constant shear rate with the aim of investigating the effect of electrical double layers around the particles on apparent suspension viscosities. To this end, expressions for Debye length, zeta potential, and ionic strength (pH) of the liquid were incorporated into our in-house lattice Boltzmann code that uses the immersed boundary method and includes subgrid lubrication models. We varied the solids concentration and particle radius, keeping the particle Reynolds number equal to 0.1. We report on results with respect to the effect of pH in the range 9 through 12 and of Debye length on apparent viscosity and spatial suspension structures, particularly at higher solids volume fractions, and on the effect of flow reversals.","complex fluids; particulate flows; rheology; solids processing; suspensions","en","journal article","","","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:ab7ccf73-eb89-4560-8577-b3aafe1f5c86","http://resolver.tudelft.nl/uuid:ab7ccf73-eb89-4560-8577-b3aafe1f5c86","Virtual sensing of subsoil strain response in monopile-based offshore wind turbines via Gaussian process latent force models","Zou, Joanna (Massachusetts Institute of Technology); Lourens, E. (TU Delft Dynamics of Structures; TU Delft Offshore Engineering); Cicirello, A. (TU Delft Mechanics and Physics of Structures)","","2023","Virtual sensing techniques have gained traction in applications to the structural health monitoring of monopile-based offshore wind turbines, as the strain response below the mudline, which is a primary indicator of fatigue damage accumulation, is impractical to measure directly with physical instrumentation. The Gaussian process latent force model (GPLFM) is a generalized Bayesian virtual sensing technique which combines a physics-driven model of the structure with a data-driven model of latent variables of the system to extrapolate unmeasured strain states. In the GPLFM, unknown sources of excitation are modeled as a Gaussian process (GP) and endowed with a structured covariance relationship with response states, using properties of the GP covariance kernel as well as correlation information supplied by the mechanical model. It is shown that posterior inference of the latent inputs and states is performed by Gaussian process regression of measured accelerations, computed efficiently using Kalman filtering and Rauch–Tung–Striebel smoothing in an augmented state-space model. While the GPLFM has been previously demonstrated in numerical studies to improve upon other virtual sensing techniques in terms of accuracy, robustness, and numerical stability, this work provides one of the first cases of in-situ validation of the GPLFM. The predicted strain response by the GPLFM is compared to subsoil strain data collected from an operating offshore wind turbine in the Westermeerwind Park in the Netherlands. 
A number of test cases are conducted, where the performance of the GPLFM is evaluated for its sensitivity to varying operational and environmental conditions, to the instrumentation scheme of the turbine, and to the fidelity of the mechanical model. In particular, this paper discusses the capacity of the GPLFM to achieve relatively robust strain predictions under high model uncertainty in the soil-foundation system of the offshore wind turbine by attributing sources of model error to the estimated stochastic input.","Gaussian process; In-situ validation; Kalman filter; Latent force model; Offshore wind turbine; Response estimation; Structural health monitoring; Virtual sensing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-03","","","Dynamics of Structures","","",""
"uuid:3dd58d3b-4b5a-4eca-aedd-72bc523a2979","http://resolver.tudelft.nl/uuid:3dd58d3b-4b5a-4eca-aedd-72bc523a2979","Dysarthric Speech Recognition, Detection and Classification using Raw Phase and Magnitude Spectra","Yue, Z. (TU Delft Multimedia Computing; King’s College London); Loweimi, Erfan (King’s College London; University of Cambridge); Cvetkovic, Zoran (King’s College London)","","2023","In this paper, we explore the effectiveness of deploying the raw phase and magnitude spectra for dysarthric speech recognition, detection and classification. In particular, we scrutinise the usefulness of various raw phase-based representations along with their combinations with the raw magnitude spectrum and filterbank features. We employed single and multi-stream architectures consisting of a cascade of convolutional, recurrent and fully-connected layers for acoustic modelling. Furthermore, we investigate various configurations and fusion schemes as well as their training dynamics. In addition, the accuracies of the raw phase and magnitude based systems in the detection and classification tasks are studied and discussed. We report the performance on the UASpeech and TORGO dysarthric speech databases and for different severity levels. Our best system achieved WERs of 31.2% and 9.1% for dysarthric and typical speech on TORGO and 30.2% on UASpeech, respectively.","Dysarthric speech processing; raw phase and magnitude spectra; single- and multi-stream acoustic modelling","en","journal article","","","","","","","","","","","Multimedia Computing","","",""
"uuid:a37b0905-b28e-4bb6-ad4c-d213eb2c931e","http://resolver.tudelft.nl/uuid:a37b0905-b28e-4bb6-ad4c-d213eb2c931e","Monolithic fiber/foam-structured catalysts: beyond honeycombs and micro-channels","Zhao, Guofeng (East China Normal University); Moulijn, J.A. (TU Delft ChemE/Product and Process Engineering); Kapteijn, F. (TU Delft ChemE/Catalysis Engineering); Dautzenberg, Frits M. (Serenix Corporation, Fort Collins); Xu, Bin (ECO Zhuo Xin Energy-Saving Technology, Shanghai); Lu, Yong (East China Normal University; Institute of Eco-Chongming)","","2023","Heterogeneous catalysis plays a pivotal role in the current production of chemicals and energy vectors. Notably, to fully utilize the intrinsic activity and selectivity of a catalyst, the chemical reactor has to be designed and operated optimally to achieve enhanced heat/mass transfer, well-defined contact time of reactants, uniform flow pattern, and high permeability. Structured catalysts are a promising strategy to overcome the major drawbacks encountered in the traditional packed-bed reactor technology due to the improved hydrodynamics in combination with enhanced heat/mass transfer. Newly emerged fiber/foam-substrates, with an entirely open 3D network structure, bring distinct advantages over the honeycomb and micro-channel contacting methods, including free radial diffusion, eddy-mixing driven heat/mass transfer, large area-to-volume ratio, and high contacting efficiency. However, placing nanocatalysts onto fiber/foam-substrates is challenging because the commercial washcoating method has serious limitations, such as nonuniformity and easy exfoliation of the coatings.
This review discusses the newly developed non-dip-coating methods for the fiber/foam-structured catalysts and their promising applications in the strongly exo-/endo-thermic and/or high throughput reaction processes.","Catalytic distillation; catalytic functionalization; electrocatalysis; environmental protection; fiber; foam; heat/mass transfer; heterogeneous catalysis; hydrogenation; monolithic catalyst; non-dip-coating; oxidation; process intensification; reforming; structured catalyst; supercapacitors; syngas conversion","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-02-06","","","ChemE/Product and Process Engineering","","",""
"uuid:6f7a1a07-a318-4f1c-9fee-842a93a65f49","http://resolver.tudelft.nl/uuid:6f7a1a07-a318-4f1c-9fee-842a93a65f49","Design and Manufacturing of an In-Package Relative Humidity Sensor with Multi-Width Interdigital Electrodes Towards Enhanced Sensitivity for Characterization of Packaging Encapsulation Materials","Sattari, R. (TU Delft Electronic Components, Technology and Materials); van Zeijl, H.W. (TU Delft Electronic Components, Technology and Materials); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials)","","2023","This study presents a novel manufacturing process and design towards an enhanced sensitivity of an in-package relative humidity sensor. The device comprises multi-width interdigital electrodes which make oxide pillars appear during wet chemical etching in the fabrication process. Those oxide pillars appear only in wider areas while completely etched away in narrower areas providing semi-floating metal fingers. Therefore, after wafer molding, the packaging encapsulation material such as the epoxy molding compound covers larger area around the electrodes and increases the sensitivity by confining more of the electrical field lines. The results confirm the enhanced sensitivity of the proposed humidity sensor for characterization and monitoring of the aging properties of packaging encapsulation materials.","Encapsulation; Electrodes; Sensitivity; Manufacturing processes; Fingers; Humidity; Reliability engineering","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-09-18","","","Electronic Components, Technology and Materials","","",""
"uuid:f5c2d38c-0bbe-4b2b-8108-1b77be19ff1a","http://resolver.tudelft.nl/uuid:f5c2d38c-0bbe-4b2b-8108-1b77be19ff1a","Structured Electronics Design: A Conceptual Approach to Amplifier Design","Montagne, A.J.M. (TU Delft Electronics)","","2023","Many people consider analog electronic circuit design complex. This is because designers can achieve the desired performance of a circuit in many ways. Together, theoretical concepts, circuit topologies, electronic devices, their operating conditions, and the system's physical construction constitute an enormous design space in which it is easy to get lost. For this reason, analog electronics often is regarded as an art rather than a solid discipline.
Structured Electronics Design:
- Defines a step-by-step hierarchically organized design process.
- Is based on solid principles from systems engineering, physics, signal processing, control theory, and network theory.
- Provides a solid foundation for circuit design education and automation.
- Has been developed at the TU Delft since the 1980s.","electronics; circuit design; circuit topologies; systems engineering; signal processing; control theory","en","book","TU Delft OPEN","978-94-6366-711-1","","","","TU Delft OPEN Textbook","","","","","Electronics","","",""
"uuid:e515be0d-bdb2-4868-8c19-ad4499e084da","http://resolver.tudelft.nl/uuid:e515be0d-bdb2-4868-8c19-ad4499e084da","Transfer learning for process design with reinforcement learning","Gao, Q. (TU Delft ChemE/Product and Process Engineering); Yang, Haoyu (Student TU Delft); Shanbhag, S.M. (TU Delft ChemE/Delft Ingenious Design); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering)","Kokossis, Antonis (editor); Georgiadis, Michael C. (editor); Pistikopoulos, Efstratios N. (editor)","2023","Process design is a creative task that is currently performed manually by engineers. Artificial intelligence provides new potential to facilitate process design. Specifically, reinforcement learning (RL) has shown some success in automating process design by integrating data-driven models that learn to build process flowsheets with process simulation in an iterative design process. However, one major challenge in the learning process is that the RL agent demands numerous process simulations in rigorous process simulators, thereby requiring long simulation times and expensive computational power. Therefore, typically short-cut simulation methods are employed to accelerate the learning process. Short-cut methods can, however, lead to inaccurate results. We thus propose to utilize transfer learning for process design with RL in combination with rigorous simulation methods. Transfer learning is an established approach from machine learning that stores knowledge gained while solving one problem and reuses this information on a different target domain. We integrate transfer learning in our RL framework for process design and apply it to an illustrative case study comprising equilibrium reactions, azeotropic separation, and recycles, our method can design economically feasible flowsheets with stable interaction with DWSIM. 
Our results show that transfer learning enables RL to design economically feasible flowsheets with DWSIM, resulting in a flowsheet with an 8% higher revenue, while the learning time is reduced by a factor of 2.","process design; Reinforcement learning; transfer learning","en","book chapter","Elsevier","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-12-30","","","ChemE/Product and Process Engineering","","",""
"uuid:c34fba35-e976-4f88-a4e1-d212b57aa037","http://resolver.tudelft.nl/uuid:c34fba35-e976-4f88-a4e1-d212b57aa037","Synthesis and optimization of NGL separation as a complex energy-integrated distillation sequence","Li, Q. (TU Delft ChemE/Product and Process Engineering; The University of Manchester); Finn, Adrian J. (The University of Manchester); Doyle, Stephen J. (The University of Manchester); Smith, Robin (The University of Manchester); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering)","Kokossis, Antonis (editor); Georgiadis, Michael C. (editor); Pistikopoulos, Efstratios N. (editor)","2023","The synthesis of heat-integrated distillation sequences for energy-efficient separation of zeotropic multicomponent mixtures is complex due to the many interconnected design degrees of freedom. This paper explores the basis on which reliable screening can be carried out. To solve this problem, a screening algorithm has been developed using optimization of a superstructure for the sequence synthesis using shortcut models, in conjunction with a transportation algorithm for the synthesis of the heat integration arrangement. Different approaches for the inclusion of heat integration are explored and compared. Then the best few designs from this screening are evaluated using rigorous simulations. A case study for the separation of NGL is used to compare options. It has been found that separation problems of the type explored can be screened reliably using shortcut distillation models in conjunction with the synthesis of heat exchanger network designs. Unintegrated designs using thermally coupled complex columns show much better performance than the corresponding designs using simple columns. 
However, once heat integration is included, the difference between designs using complex columns and simple columns narrows significantly.","Distillation sequencing; energy efficiency; process optimization; process synthesis and design","en","book chapter","Elsevier","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-12-30","","","ChemE/Product and Process Engineering","","",""
"uuid:facca5f9-d1ea-4710-9b90-5afb880b8d05","http://resolver.tudelft.nl/uuid:facca5f9-d1ea-4710-9b90-5afb880b8d05","A Decision Tree Induction Algorithm for Efficient Rule Evaluation Using Shannon’s Expansion","Herrera-Semenets, Vitali (Advanced Technologies Application Center); Bustio-Martínez, Lázaro (Iberoamericana University); Hernández-León, Raudel (Advanced Technologies Application Center); van den Berg, Jan (TU Delft Cyber Security)","Calvo, H. (editor); Martínez-Villaseñor, L. (editor); Ponce, H. (editor)","2023","Decision trees are one of the most popular structures for decision-making and the representation of a set of rules. However, when a rule set is represented as a decision tree, some quirks in its structure may negatively affect its performance. For example, duplicate sub-trees and rule filters, that need to be evaluated more than once, could negatively affect the efficiency. This paper presents a novel algorithm based on Shannon’s expansion, which guarantees that the same rule filter is not evaluated more than once, even if repeated in other rules. This fact increases efficiency during the evaluation process using the induced decision tree. Experiments demonstrated the viability of the proposed algorithm in processing-intensive scenarios, such as in intrusion detection and data stream analysis.","Decision Tree; Rule-Based Systems; Data Processing","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-05-09","","","Cyber Security","","",""
"uuid:bcccfa5e-f406-4ac6-8cf2-71eb00b07d07","http://resolver.tudelft.nl/uuid:bcccfa5e-f406-4ac6-8cf2-71eb00b07d07","The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing","Agiollo, A. (TU Delft Interactive Intelligence; Alma Mater Studiorum – Universitá di Bologna); Cavalcante Siebert, L. (TU Delft Interactive Intelligence); Murukannaiah, P.K. (TU Delft Interactive Intelligence); Omicini, Andrea (Alma Mater Studiorum – Universitá di Bologna)","Calvaresi, Davide (editor); Najjar, Amro (editor); Omicini, Andrea (editor); Carli, Rachele (editor); Ciatto, Giovanni (editor); Aydogan, Reyhan (editor); Mualla, Yazan (editor); Främling, Kary (editor)","2023","Although popular and effective, large language models (LLM) are characterised by a performance vs. transparency trade-off that hinders their applicability to sensitive scenarios. This is the main reason behind many approaches focusing on local post-hoc explanations recently proposed by the XAI community. However, to the best of our knowledge, a thorough comparison among available explainability techniques is currently missing, mainly for the lack of a general metric to measure their benefits. We compare state-of-the-art local post-hoc explanation mechanisms for models trained over moral value classification tasks based on a measure of correlation. By relying on a novel framework for comparing global impact scores, our experiments show how most local post-hoc explainers are loosely correlated, and highlight huge discrepancies in their results—their “quarrel” about explanations. 
Finally, we compare the distribution of impact scores obtained from each local post-hoc explainer with human-made dictionaries, and point out that there is no correlation between explanation outputs and the concepts humans consider salient.","eXplainable Artificial Intelligence; Local Post-hoc Explanations; Moral Values Classification; Natural Language Processing","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-04-01","","","Interactive Intelligence","","",""
"uuid:790cc5fe-7213-4992-b84f-993c5d822801","http://resolver.tudelft.nl/uuid:790cc5fe-7213-4992-b84f-993c5d822801","Mach Number Estimation and Pressure Profile Measurements of Expanding Dense Organic Vapors","Head, A.J. (TU Delft Facility Aerodynamics Laboratory); Michelis, Theodorus (TU Delft Aerodynamics); Beltrame, F. (TU Delft Flight Performance and Propulsion); Fuentes Monjas, B. (TU Delft Flight Performance and Propulsion); Casati, Emiliano (ETH Zürich); de Servi, C.M. (TU Delft Flight Performance and Propulsion; Flemish Institute for Technological Research); Colonna, Piero (TU Delft Flight Performance and Propulsion)","White, M. (editor)","2023","This paper describes an experiment conducted within the nozzle test section of the Organic Rankine Cycle Hybrid Integrated Device (ORCHID) aimed at providing accurate data for the validation of NICFD flow solvers [5]. A supersonic flow of the dense vapor siloxane MM established in the nozzle of the setup was characterized by means of the schlieren technique and by pressure taps along the nozzle profile. The nozzle inlet conditions corresponded to a stagnation temperature and pressure of T0=253∘C and P0=18.36bara. At these inlet conditions, the compressibility factor of the fluid is Z0= 0.58. The nozzle backpressure was equal to Pb=2.2bara. The experimental data-set includes: 1) the average mid-plane local Mach number, which was derived from the schlieren images by estimating the angle of the Mach waves originating from the roughness of the upper and lower nozzle surfaces, 2) the angle of a shock wave generated by a 5∘ wedge placed at the nozzle exit, also detectable in the schlieren images, and 3) the static pressure distribution along the flow expansion acquired with a Scanivalve DSA3218 pressure scanner device. The Mach number at the nozzle exit estimated based on the schlieren images is M= 1.95 ± 0.05, very close to the expected value of M= 2 according to the design conditions of the experiment. 
The static pressure measurements have a maximum absolute uncertainty amounting to ± 1.80 kPa in the initial stages of the expansion. This information was used to assess the capability of the open-source SU2 flow solver in evaluating the NICFD effects in a supersonic flow of MM when the fluid thermodynamic properties are modeled with a cubic equation of state. For this purpose, two-dimensional Euler simulations were carried out with SU2 for the operating conditions achieved in the experiment. The numerical results are in good agreement with the experimental data. The largest deviation between the simulation and experiment is observed in the nozzle uniform region, where two dips in the Mach number occur due to a slight local decrease in flow velocity owing to two weak shock waves. The shock wave generated by the wedge located at the nozzle outlet propagates with two different angles, namely, β_above = 37.6° ± 0.86°, and β_below = 31.6° ± 0.64°, due to the axial misalignment of the wedge with respect to the flow.","data processing; error identification and uncertainty estimation; Schlieren measurements","en","book chapter","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-11-02","","","Facility Aerodynamics Laboratory","","",""
"uuid:79e0ac50-fab4-4db4-af3f-cca1d280bbd3","http://resolver.tudelft.nl/uuid:79e0ac50-fab4-4db4-af3f-cca1d280bbd3","A Self-supervised Classification Algorithm for Sensor Fault Identification for Robust Structural Health Monitoring","Oncescu, Andreea Maria (University of Oxford); Cicirello, A. (TU Delft Engineering Structures; TU Delft Mechanics and Physics of Structures)","Rizzo, Piervincenzo (editor); Milazzo, Alberto (editor)","2023","A self-supervised classification algorithm is proposed for detecting and isolating sensor faults of health monitoring devices. This is achieved by automatically extracting information from failure investigations. This approach uses (i) failure reports for extracting comprehensive failure labels; (ii) recorded data of a faulty monitoring device and the information of the failure type for selecting fault-sensitive features. The features-label pairs are then used to train a classification algorithm, so that when a new set of measurements becomes available, the algorithm is capable of identifying with a high accuracy one of the possible failure types included in the training data set. The proposed approach is successfully applied to the failure investigations conducted on a low-cost wearable device, displaying similar challenges encountered in SHM.","Monitoring device failure; Natural language processing; Self-supervised machine learning; Sensor failures; SHM","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-12-19","","Engineering Structures","Mechanics and Physics of Structures","","",""
"uuid:c9e12dcd-4238-4eff-a693-5f67c3ab649d","http://resolver.tudelft.nl/uuid:c9e12dcd-4238-4eff-a693-5f67c3ab649d","Multi-objective Black-box Test Case Prioritization based on Wordnet Distances","van Dinten, I. (TU Delft Software Engineering); Zaidman, A.E. (TU Delft Software Engineering); Panichella, A. (TU Delft Software Engineering)","Arcaini, Paolo (editor); Yue, Tao (editor); Fredericks, Erik M. (editor)","2023","Test case prioritization techniques have emerged as effective strategies to optimize this process and mitigate the regression testing costs. Commonly, black-box heuristics guide optimal test ordering, leveraging information retrieval (e.g., cosine distance) to measure the test case distance and sort them accordingly. However, a challenge arises when dealing with tests of varying granularity levels, as they may employ distinct vocabularies (e.g., name identifiers). In this paper, we propose to measure the distance between test cases based on the shortest path between their identifiers within the WordNet lexical database. This additional heuristic is combined with the traditional cosine distance to prioritize test cases in a multi-objective fashion. Our preliminary study conducted with two different Java projects shows that test cases prioritized with WordNet achieve larger fault detection capability (APFD C ) compared to the traditional cosine distance used in the literature.","Empirical Software Engineering; Search-Based Software Testing; Test Case Prioritization; Wordnet; Natural Language Processing","en","conference paper","Springer","","","","","","","2024-06-24","","","Software Engineering","","",""
"uuid:8dbb5bd0-730c-4f74-9a4d-0c7947ec9bdb","http://resolver.tudelft.nl/uuid:8dbb5bd0-730c-4f74-9a4d-0c7947ec9bdb","Where a Little Change Makes a Big Difference: A Preliminary Exploration of Children’s Queries","Pera, M.S. (TU Delft Web Information Systems); Murgia, Emiliana (University of Milan); Landoni, Monica (University of Lugano); Huibers, Theo (University of Twente); Aliannejadi, Mohammad (Universiteit van Amsterdam)","Kamps, Jaap (editor); Goeuriot, Lorraine (editor); Crestani, Fabio (editor); Maistro, Maria (editor); Joho, Hideo (editor); Davis, Brian (editor); Gurrin, Cathal (editor); Caputo, Annalina (editor); Kruschwitz, Udo (editor)","2023","This paper contributes to the discussion initiated in a recent SIGIR paper describing a gap in the information retrieval (IR) literature on query understanding–where they come from and whether they serve their purpose. Particularly the connection between query variability and search engines regarding consistent and equitable access to all users. We focus on a user group typically underserved: children. Using preliminary experiments (based on logs collected in the classroom context) and arguments grounded in children IR literature, we emphasize the importance of dedicating research efforts to interpreting queries formulated by children and the information needs they elicit. We also outline open problems and possible research directions to advance knowledge in this area, not just for children but also for other often-overlooked user groups and contexts.","Children; Queries; Query processing; Search","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-09-17","","","Web Information Systems","","",""
"uuid:1c6c2732-6fa7-4356-a71e-6387048932d2","http://resolver.tudelft.nl/uuid:1c6c2732-6fa7-4356-a71e-6387048932d2","SECLEDS: Sequence Clustering in Evolving Data Streams via Multiple Medoids and Medoid Voting","Nadeem, A. (TU Delft Cyber Security); Verwer, S.E. (TU Delft Cyber Security)","Amini, Massih-Reza (editor); Canu, Stéphane (editor); Fischer, Asja (editor); Guns, Tias (editor); Kralj Novak, Petra (editor); Tsoumakas, Grigorios (editor)","2023","Sequence clustering in a streaming environment is challenging because it is computationally expensive, and the sequences may evolve over time. K-medoids or Partitioning Around Medoids (PAM) is commonly used to cluster sequences since it supports alignment-based distances, and the k-centers being actual data items helps with cluster interpretability. However, offline k-medoids has no support for concept drift, while also being prohibitively expensive for clustering data streams. We therefore propose SECLEDS, a streaming variant of the k-medoids algorithm with constant memory footprint. SECLEDS has two unique properties: i) it uses multiple medoids per cluster, producing stable highquality clusters, and ii) it handles concept drift using an intuitive Medoid Voting scheme for approximating cluster distances. Unlike existing adaptive algorithms that create new clusters for new concepts, SECLEDS follows a fundamentally different approach, where the clusters themselves evolve with an evolving stream. Using real and synthetic datasets, we empirically demonstrate that SECLEDS produces high-quality clusters regardless of drift, stream size, data dimensionality, and number of clusters. We compare against three popular stream and batch clustering algorithms. The state-of-the-art BanditPAM is used as an offline benchmark. SECLEDS achieves comparable F1 score to BanditPAM while reducing the number of required distance computations by 83.7%. Importantly, SECLEDS outperforms all baselines by 138.7% when the stream contains drift. 
We also cluster real network traffic, and provide evidence that SECLEDS can support network bandwidths of up to 1.08 Gbps while using the (expensive) dynamic time warping distance.","Sequence Clustering; k-medoids; stream processing; network traffic sampling","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-09-17","","","Cyber Security","","",""
"uuid:3ac97883-01e6-4ac6-950d-fe12231b57f0","http://resolver.tudelft.nl/uuid:3ac97883-01e6-4ac6-950d-fe12231b57f0","Radar Sensing in Healthcare: Challenges and Achievements in Human Activity Classification & Vital Signs Monitoring","Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Guendel, Ronny (TU Delft Microwave Sensing, Signals & Systems); Kruse, N.C. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","Rojas, Ignacio (editor); Valenzuela, Olga (editor); Rojas Ruiz, Fernando (editor); Herrera, Luis Javier (editor); Ortuño, Francisco (editor)","2023","Driven by its contactless sensing capabilities and the lack of optical images being recorded, radar technology has been recently investigated in the context of human healthcare. This includes a broad range of applications, such as human activity classification, fall detection, gait and mobility analysis, and monitoring of vital signs such as respiration and heartbeat. In this paper, a review of notable achievements in these areas and open research challenges is provided, showing the potential of radar sensing for human healthcare and assisted living.","human activity classification; machine learning; Radar sensing; radar signal processing; vital signs monitoring","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2024-01-15","","","Microwave Sensing, Signals & Systems","","",""
"uuid:b30a5dcc-81e6-436b-bdf3-c55155d6d82d","http://resolver.tudelft.nl/uuid:b30a5dcc-81e6-436b-bdf3-c55155d6d82d","Peculiarities and Experience of W-Band Cloud Radar Calibration","Yanovsky, Felix J. (National Aviation University); Pitertsev, Aleksander A. (National Aviation University); Unal, C.M.H. (TU Delft Atmospheric Remote Sensing); Russchenberg, H.W.J. (TU Delft Geoscience and Remote Sensing)","","2023","This paper is devoted to discussing peculiarities of W-band cloud radar calibration. After a brief overview of meteorological radar calibration methods for quantitative information retrieval, we focus on problems and their possible solutions with respect to mm-wave radar calibration. The experimental part of the research is based on multi-instrument measurements performed during several years in the Cabauw experimental meteorological site in the Netherlands. The accumulated data are used for comparison of 94 GHz radar rain measurements with non-radar droplet size distribution measurements, provided by laser disdrometers. Calculations are done taking into account data of other in situ meteorological measurements. A specialized MATLAB software tool for processing such complex data and radar calibration is developed and demonstrated.","data integrity; data processing; electromagnetic scattering; radar measurements; radar remote sensing; sensor fusion; signal processing","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","","","2024-03-28","","Geoscience and Remote Sensing","Atmospheric Remote Sensing","","",""
"uuid:6aed720a-b970-43c2-9c84-f028a8127230","http://resolver.tudelft.nl/uuid:6aed720a-b970-43c2-9c84-f028a8127230","Supporting Electronic Mental Health with Artificial Intelligence: Thought Record Analysis and Guidance","Burger, Franziska (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Brinkman, W.P. (promotor); Delft University of Technology (degree granting institution)","2022","This thesis investigates how artificial intelligence can support e-mental health for depression, i.e. the delivery of treatment and prevention interventions for depression using technology. E-mental health for depression is a promising means for bridging the treatment gap since it addresses many of the barriers that prevent people in need of help from seeking or obtaining it. Additionally, many systems have been found to be effective in controlled trials. However, as human support for e-health interventions decreases so do their effectiveness and users’ adherence. While one possible explanation is that human support is a necessary ingredient of a successful intervention, another is that the technology is not satisfying the needs of users to the best of its abilities. This finding inspired us to take a closer look at the technological implementation of the functionality of these systems. To this end, we developed a set of scales that assess the technological sophistication of the functional components of systems, the e-mental health degree of technological sophistication (eHDTS) scales. In a systematic literature review of the field, we then divided all systems developed between 2000 and 2017 for the prevention or treatment of depression reported in the scientific literature into their functional components and rated those components with the eHDTS scales. We found that most systems that had been developed until 2017 were low-tech implementations, consisting mostly of psychoeducation and having a one-way information stream from system to user. 
This clearly contrasts with face-to-face therapy, in which the therapist closely attends to the patient and provides his or her knowledge and insight strategically to signal understanding and empathy, foster self-reflection, teach, or obtain more information. Based on this consideration, we set out to develop a conversational agent capable of signaling to the user, while completing a thought record together in dialog, that it had processed the content of what it had been told. Our hypothesis was that this would motivate users to complete more thought records and to feel more engaged. Thought recording is a core technique of cognitive therapy in which patients are asked to systematically monitor their thinking in situations that caused a maladaptive response. Cognitive theory posits that the negative cognitive appraisals that are responsible for the low mood experienced in patients with depression stem from maladaptive schemas, i.e., beliefs that we hold as truths about the world, ourselves, and the future. To get the conversational agent to “understand” the thoughts provided by the user from this cognitive theory perspective, we collected a corpus of thought records from Amazon Mechanical Turk workers, manually coded the thoughts with respect to the underlying schema, and trained various machine learning models to do the same labeling. A set of deep neural networks outperformed the other algorithms and was then deployed in the conversational agent. We used a between-subjects design to expose 308 participants recruited from Prolific to the conversational agent.
The three conditions differed with respect to the feedback-giving capabilities of the conversational agent in response to a thought record: low feedback richness entailed an acknowledgment of the completion of the thought record (thanking the user), medium feedback richness entailed the acknowledgment plus feedback on the process (how many steps the user completed in relation to his or her previous thought records), and rich feedback entailed the medium feedback combined with feedback on the content (an interpretation of the thought record with respect to the underlying schema). While all users were able to complete the thought records with the conversational agent, we did not find supportive evidence that the agent’s feedback strategy could increase users’ motivation to complete more thought records or their self-reported engagement in self-reflection. Future research may investigate why we observed these null results by studying whether the feedback is processed correctly, whether a population with depression that is motivated by a wish to get healthy might behave or experience the system differently from our sample that was recruited online and did not meet diagnostic criteria for depression, or whether more advanced social and interaction capabilities need to accompany the complex feedback for it to be believable.","computerized therapy; conversational agents; natural language processing; cognitive therapy","en","doctoral thesis","","978-94-6469-147-4","","","","","","2022-11-24","","","Interactive Intelligence","","",""
"uuid:cdca9bf1-3e6b-4bfc-9d9d-b5acdd3f900d","http://resolver.tudelft.nl/uuid:cdca9bf1-3e6b-4bfc-9d9d-b5acdd3f900d","Generalized Models of Sequential Decision-Making under Uncertainty","Neustroev, G. (TU Delft Algorithmics)","de Weerdt, M.M. (promotor); Verzijlbergh, R.A. (copromotor); Delft University of Technology (degree granting institution)","2022","Sequential decision-making under uncertainty is an important branch of artificial intelligence research with a plethora of real-life applications. In this thesis, we generalize two fundamental properties of the decision-making process. First, we show that the theory on planning methods for finite spaces can be extended to infinite but countable spaces. Second, we propose a unified model of reinforcement learning algorithms that employ the principle of optimism in the face of uncertainty. This model is used to explain why these methods are efficient. We use the developed theory to design novel algorithms. Depending on the user's needs, these algorithms can either automate the decision-making process completely, or provide advice in decision-support systems.
We start by presenting the basic concepts from the theory of decision-making and discuss the two approaches to it: planning and reinforcement learning. We look at a few typical sequential decision-making problems of increasing difficulty. In particular, we present a game that involves grid navigation and the problems of warehouse management and wind farm operation. Next, we survey the state-of-the-art methods for solving such problems.
Based on this analysis, we identify the following research opportunities. In planning, models with non-stationary and countably-infinite data remain relatively untreated because they are equivalent to infinite-dimensional optimization problems, which are notoriously difficult to solve even approximately. In reinforcement learning, optimistic approaches lead to computational efficiency, yet the theory of optimism remains undeveloped. Moreover, while reinforcement learning shines at playing games, such as chess, shōgi, Go, and StarCraft II, its practical applications remain few.
Next, we overview a mathematical framework of sequential decision-making under uncertainty known as the Markov decision process. We explain how the goal of the decision-maker can be expressed as an optimization problem and present two approaches to achieving this goal. The first—more common—approach assigns so-called values to different actions. The other approach uses so-called occupancies that tell how often the agent should choose the actions instead of evaluating how good these actions are. In fact, the two approaches are known to be dual to each other. While this duality is well studied in the finite case, the infinite case is less explored. To address this knowledge gap, we present a new dual formulation for countable problems, both finite and infinite.
Afterwards, we use the dual formulation to design a new planning algorithm for infinite-horizon problems with non-stationary data. These problems are essentially infinite-dimensional optimization problems and as such are impossible to solve exactly using the standard approaches. We show that they can be solved by changing what is defined as optimal behavior: instead of seeking universally optimal policies, we consider initial-decision-optimal ones. Instead of planning all of the actions beforehand, these policies can be used to plan given the currently observed data. When the next decision is required, the process can be repeated in the same manner, leading to an optimal decision-making strategy. Our approach uses the occupancy-value duality to rule out suboptimal actions based on so-called truncations: finite-time approximations of the infinite-horizon decision-making problem.
We extend the truncation approach to a more general setting of decision-making problems with countably-infinite state spaces. Instead of time-based truncations, we consider state-based ones. This allows us to limit the amount of data required to make the decisions and to design an algorithm for a class of problems that otherwise cannot be solved to optimality. This approach belongs to a family of methods called policy iteration: starting from an initial policy, it constructs a series of improvements in the decisions while ruling out choices that are provably suboptimal.
After that, we turn to reinforcement learning. For a long time, the only provably efficient reinforcement-learning methods were model-based ones; recently, a family of model-free optimistic methods emerged, each of them accompanied by an analysis of how sample-efficient the method is. We, too, study optimistic reinforcement learning, but in contrast to the existing research, we seek to understand not how efficient it is, but why it is efficient. Our analysis results in a formula that explains the three factors that cause regret—the efficiency loss—in optimistic reinforcement learning: the problem size, the measure of exploration, and the estimation error caused by the mismatch between the realized transitions and their true distribution. It can be applied to all of the existing algorithms as well as new ones. We design one such new algorithm and show how our theoretical framework can facilitate the proof of its efficiency.
Finally, we consider a high-impact real-world sequential decision-making problem known as active wake control. Wind turbines can negatively impact each other with their wakes. These wake-induced losses can be reduced by changing the turbine orientations. Unfortunately, the optimal control strategy is non-trivial. To address this, existing approaches use simplified wake models in combination with numerical optimization methods; instead we propose to use model-free reinforcement learning. As a first step towards this goal, we present a wind farm simulator that is suitable for reinforcement learning and better reflects the realities of wind farm operation than other existing tools. Using this simulator, we show that previous research used a suboptimal action representation in this problem; we identify two alternatives, both of which improve the learning efficiency. Additionally, we demonstrate that reinforcement learning is robust to errors in the observations, providing further evidence that it is a fitting approach to active wake control.
Our contributions advance the state of the art in the theory of sequential decision-making under uncertainty and its applications. These advances hint at unexplored connections between countably-infinite planning and optimistic learning, which may lead to even more efficient algorithms for sequential decision-making under uncertainty in the future.","sequential decision-making under uncertainty; optimization; Markov decision processes; planning; linear programming; duality; reinforcement learning; optimistic learning","en","doctoral thesis","","978-94-6366-624-4","","","","","","","","","Algorithmics","","",""
"uuid:3b180cf0-8ff7-4dba-a76a-09709271141a","http://resolver.tudelft.nl/uuid:3b180cf0-8ff7-4dba-a76a-09709271141a","PHA biosynthesis, recovery, and application: A circular value chain for production of self-healing concrete from waste","Vermeer, C.M. (TU Delft BT/Environmental Biotechnology)","Kleerebezem, R. (promotor); Jonkers, H.M. (promotor); Delft University of Technology (degree granting institution)","2022","Polyhydroxyalkanoates (PHA) are a family of biopolymers produced intracellularly by a range of different bacteria. PHA have attracted widespread attention as an environmentally friendly replacement for fossil-based polymers, because they have thermoplastic and/or elastomeric properties, and are also biobased and biodegradable. Moreover, the properties of PHA can be adjusted by tuning the monomeric composition of the polymer. Currently, more than 150 different monomers have been discovered which can form the building blocks of the PHA polymer.
PHA production can be divided into three parts: biosynthesis, recovery, and application. The first part is the biotechnological production of bacteria with PHA inside their cells. First, an organic substrate can be anaerobically converted into volatile fatty acids (VFA). These VFA form the preferred substrate for PHA production by bacteria in the next steps. An approach to make PHA biosynthesis cost-effective is to use organic waste streams as substrate in combination with mixed microbial communities. This reduces the relatively large costs for raw materials and for sterilization of the equipment. Thus far, at least 19 pilot projects have been operated to produce PHA from municipal or industrial organic waste streams using this approach. In nearly all cases, the random copolymer poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) was produced, indicating that the biosynthesis of this specific polymer is reasonably well-established. Research on the production of other types of PHA from waste streams is still scarce (Chapters 2 and 3).
The main obstacles that prevent the large-scale industrial implementation of waste-derived PHA are the recovery and the application step. First of all, the PHA recovery costs are responsible for a large fraction of the total production cost due to high energy and chemical demand. Another challenge of the PHA recovery step is to achieve a product of high and consistent quality when waste is used as substrate. More research is required to accurately predict the relationship between raw material input, process parameters, and final mechanical properties of the produced PHA (Chapter 4).
For the application of PHA, it appeared that introducing waste-derived PHA into the conventional plastic market is a lengthy and complicated procedure. This is mainly caused by a lack of distribution channels, a lack of experience in bioplastic processing, and by the small scale at which PHA is currently produced compared to petrochemical plastics. Therefore, the market entry of waste-derived PHA could have a higher chance of success if the initial aim is not to produce bioplastics. Instead, the focus should be on new applications where minor fractions of impurities and small variations in polymer characteristics are not regarded as problematic. Such a niche application can stimulate the introduction of waste-derived PHA into the market, while avoiding the obstacles and the complexity of the conventional plastic industry. Moreover, these applications can potentially exploit the unique properties of PHA (e.g., biodegradability) more effectively (Chapter 5).
The aim of this thesis was to optimize and balance waste-derived PHA biosynthesis with recovery, and to target a niche application of PHA in self-healing concrete. To this end, research was conducted on all parts of the value chain from waste to self-healing concrete: PHA biosynthesis (Chapters 2 and 3), PHA recovery (Chapter 4), and the application of PHA (Chapter 5).
Chapter 2 investigates isobutyrate as the sole carbon source for a microbial enrichment culture in comparison to its structural isomer butyrate. Isobutyrate is a VFA appearing in multiple waste valorization routes, such as anaerobic fermentation, chain elongation, and microbial electrosynthesis, but had never been assessed individually for its PHA production potential. The results reveal that the enrichment on isobutyrate has a very distinct character regarding microbial community development, PHA productivity, and even PHA composition. Although butyrate is a superior substrate in almost every aspect, this research shows that isobutyrate-rich waste streams have a noteworthy PHA-producing potential. The main finding is that the dominant microorganism, a Comamonas sp., is linked to the production of a unique PHA family member, poly(3-hydroxyisobutyrate) (PHiB), up to 37% of the cell dry weight. This chapter is the first scientific report identifying microbial PHiB production, demonstrating that mixed microbial communities can be a powerful tool for the discovery of new metabolic pathways and new types of polymers.
In Chapter 3, another uncommon VFA, octanoate, was examined for PHA production. Several enrichment strategies were tested to select for a community with a high medium-chain-length PHA (mcl-PHA) storage capacity when feeding octanoate. Based on the analysis of the metabolic pathways, the hypothesis was formulated that mcl-PHA production is more favorable under oxygen-limited conditions than short-chain-length PHA (scl-PHA) production. This hypothesis was confirmed by bioreactor experiments showing that oxygen limitation during the PHA accumulation resulted in a higher fraction of mcl-PHA over scl-PHA (i.e., a PHA content of 76 wt% with a mcl-fraction of 0.79 with oxygen limitation, compared to a PHA content of 72 wt% with a mcl-fraction of 0.62 without oxygen limitation). Physicochemical analysis revealed that the extracted PHA could be separated efficiently into a hydroxybutyrate-rich fraction and a hydroxyhexanoate/hydroxyoctanoate-rich fraction. The ratio between the two fractions could be adjusted by changing the environmental conditions. Almost all enrichments were dominated by Sphaerotilus sp. This chapter is the first scientific report that links this genus to mcl-PHA production, demonstrating that microbial enrichments can be a powerful tool to explore mcl-PHA biodiversity and to discover novel industrially relevant strains.
In solvent extraction of PHA, the choice of solvent has a profound influence on many aspects of the process design. Chapter 4 provides a framework to perform a systematic solvent screening for PHBV extraction. First, a database was constructed of 35 solvents that were assessed according to six different selection criteria. Then, six solvents were chosen for further experimental analysis, including 1-butanol, 2-butanol, 2-ethylhexanol (2-EH), dimethyl carbonate (DMC), methyl isobutyl ketone (MIBK), and acetone. The main findings are that the extractions with acetone and DMC obtained the highest yields (91-95%) with reasonably high purities (93-96%), with acetone having the key advantage that water can be used as an anti-solvent. Moreover, the results provided new insights into the mechanisms behind PHBV extraction by pointing out that at elevated temperatures the extraction efficiency is less determined by the solvent’s solubility parameters and more determined by the solvent size. Although case-specific factors play a role in the final solvent choice, we believe that this chapter provides a general strategy for the solvent selection process.
In Chapter 5, a niche application for waste-derived PHA is proposed and tested: its use as a bacterial substrate in self-healing concrete. Self-healing concrete is an established technology developed to overcome the inevitable problem of crack formation in concrete structures by incorporating a so-called bacteria-based healing agent. Currently, this technology is hampered by the cost involved in the preparation of this healing agent. This chapter provides a proof-of-concept for the use of waste-derived PHA as a bacterial substrate in the healing agent. The results show that a PHA-based healing agent, produced from PHA unsuitable for thermoplastic applications, can induce crack healing in concrete specimens, thereby reducing the water permeability of the cracks significantly compared to specimens without a healing agent. For the first time, these two emerging fields of engineering, waste-derived PHA and self-healing concrete, both driven by the need for environmental sustainability, are successfully linked. We foresee that this new application will facilitate the implementation of waste-derived PHA technology, while simultaneously supplying circular and potentially more affordable raw materials for self-healing concrete.
Chapter 6 provides a general discussion in which overarching topics are selected for a thorough analysis. Finally, recommendations for further research are proposed and an outlook for the field is given.
Focusing on this knowledge gap, the aim of this work is to develop an understanding of the effect of the casting parameters on the meso-level structure of cast glass, and thereupon of the relationship between this meso-level structure and the strength, stiffness and fracture resistance of cast glass components. Towards this aim, the dissertation adopts an experimental approach based on physical prototyping by kiln-casting, and on destructive and non-destructive testing. The experimental work shows that by kiln-casting, a larger variety of chemical compositions can be cast, even at relatively low processing temperatures. As a consequence, a broad range of mechanical properties arises, especially when waste cullet is employed. Based on the casting parameters, combinations of different defects, grouped in meso-level structures, are commonly found in cast glass, yet these can often be tolerable when situated in the glass bulk. The dissertation highlights the potential of recycling-by-casting of currently challenging-to-recycle glass waste into reliable and aesthetically unique structural components, and the advantages of engineering composite cast glasses. It also underlines the need for manufacturing guidelines, test data, product certifications and quality control protocols for the successful implementation of cast glass in the built environment.
In this thesis, we extend the theory and the applications of stochastic duality in the following two contexts:
i) evolution of particles in space-inhomogeneous settings and, more precisely, processes in random environment
and processes in a multi-layer system;
ii) evolution of particles in the continuum.","Interacting particle systems; Markov Processes; Hydrodynamic limit; Stochastic Duality; Non-equilibrium steady state; Random environment; Stochastic Homogenization; Boundary driven systems; Inhomogeneous system","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:b46b14e3-c0cf-4aca-a21d-b7eeda6eb2df","http://resolver.tudelft.nl/uuid:b46b14e3-c0cf-4aca-a21d-b7eeda6eb2df","Surface-related multiple estimation and removal with focus on shallow water","Zhang, D. (TU Delft ImPhys/Computational Imaging)","Verschuur, D.J. (promotor); de Jong, N. (promotor); Delft University of Technology (degree granting institution)","2022","For exploration and development of the earth, seismic surveys are acquired to provide information about the subsurface, within specifications of accuracy set by geologists and engineers, and within business constraints on budgets and turn-around time for processing and interpretation of the data. The case of seismic surveys that are acquired, partly or entirely, in shallow water is relevant for the industry worldwide. However, the acquisition and processing for shallow water seismic surveys requires considerable modifications of standard procedures to meet the survey goals. In this work, the focus is on modifications in processing and in particular with respect to the handling of multiply scattered energy, assuming standard acquisition practices.
Multiple scattering is a significant wave phenomenon when seismic waves propagate through the earth. Its corresponding energy, i.e., seismic multiples, is usually unwanted due to the interference with primary reflections. The traditional seismic surface-related multiple estimation and removal method is limited by both the unrecorded data reconstruction (e.g., the missing near offsets and the data gap between the crosslines) and the subsequent multiple adaptive subtraction performance. These issues become even more severe for the shallow-water environment, which in this thesis is typically defined as a water depth of around 50-200 m within the exploration seismic frequency range (i.e., 2-120 Hz). Shallow water creates highly curved seismic reflection events with strong lateral amplitude variations, and complex overlap between primaries and surface-related multiples. Conventional data reconstruction methods fail to tackle the missing data in shallow water, and are even more problematic in 3D. In addition, the dilemma between primary damage and surface multiple leakage during the adaptive subtraction is very much present for shallow-water data.
To attack the unrecorded data reconstruction issue, an integrated closed-loop surface-related multiple estimation (CL-SRME) and full-wavefield migration (FWM) framework is proposed for better primary and surface-related multiple estimation, which is able to support CL-SRME with good-quality near offsets in order to avoid the primary estimation failure that typically occurs in shallow-water environments. We suggest using multiples to provide information on the missing near-offset data by means of FWM, where primaries and surface multiples together create an image of the shallow subsurface. Taking advantage of FWM - with its closed-loop simultaneous primaries and multiples imaging approach - as the data reconstruction method and feeding the reconstructed near offsets to CL-SRME are the most important components to tackle the shallow-water issues in a physically consistent manner. This new integrated framework will have its main impact on a full 3D implementation with coarse sampling. Therefore, a similar cascaded framework for 3D surface-related multiple estimation in shallow-water scenarios, which consists of a data reconstruction step via 3D FWM and a surface multiple estimation step via a 3D SRME-type method, is also introduced in the thesis. Improvements in estimating surface multiples and primaries, due to good data reconstruction via FWM, have been demonstrated on both 2D and 3D synthetic data. Despite lacking an accurate subsurface velocity model for the 2D field data, the FWM-reconstructed near-offset water-bottom reflection still improves the quality of the estimated surface multiples and primaries.
In order to mitigate the surface-related multiple adaptive subtraction dilemma, we have also introduced a two-step framework for surface multiple leakage extraction in this thesis, and thus extended our seismic multiple processing toolbox. This two-step framework, based on local primary-and-multiple orthogonalization (LPMO), is both versatile and efficient for leaked multiple extraction, so that primaries can be better preserved without leaving much multiple energy. The initial estimation step usually prefers SRME with a conservative adaptive subtraction or any conservative multiple estimation method, after which LPMO compensates the initially estimated primaries and multiples. Promising multiple leakage extraction has been achieved on both synthetic and field data sets. Although effective compared to standard subtraction, LPMO is slow and computationally intensive. Therefore, a fast LPMO (FLPMO) using a scaled point-by-point division, rather than the time-consuming shaping regularization-based iterative inversion, is further introduced to accelerate the whole process. Results on two different field data sets display a very similar multiple leakage extraction performance compared to LPMO, while indicating that the scaled point-by-point division in FLPMO is approximately 40 times faster than the shaping regularization-based inversion in LPMO. Moreover, the complete FLPMO framework is approximately four times faster than the LPMO framework, and is thereby now comparable in cost to the industry-standard L2 adaptive subtraction.
With the advance of deep learning (DL) technology, the aforementioned two issues in shallow water can also be investigated via a U-Net based DL neural network (NN) framework. More specifically, a DL-based de-aliasing NN is introduced for the initial surface multiple estimation, where the strong data-fitting power of DL can directly project the aliased multiples, due to coarse sampling, onto their corresponding unaliased target multiples. Meanwhile, a DL-based adaptive subtraction NN is proposed with both the total full wavefield and the predicted multiples as two input channels to overcome the adaptive subtraction dilemma. In this way, the robust physics, i.e., the estimated multiples, is used and the synthetic primary labels can be helpful to the framework. Note that the data distribution between training and test data plays a significant role in these U-Net based applications. Training on field data and testing on nearby field data shows the best performance due to a similar data distribution.
Shallow water is very challenging for surface-related multiple estimation. Physics-based deterministic approaches, e.g., FWM-based data reconstruction and LPMO, can help geophysicists better understand and partially solve the essentials of the problem. For poorly described deterministic problems, e.g., adaptive subtraction and multiple de-aliasing, DL can find the underlying relationships that are not easily achievable by the deterministic methods. A combination of deterministic methods and DL will result in optimal performance. This is where further research should concentrate.
Yet, WCRS is a specialist branch within the coastal engineering and user community. The technique typically requires a certain amount of user expertise and it has mostly been applied in research settings. While data can be retrieved on kilometre scale with XBand-radars and cameras, it was historically difficult to scale up WCRS to entire coasts, which was a reason to discontinue its application in the Netherlands. Besides land-based instruments (i.e., XBand-radars, fixed camera stations), in the meantime also airborne UAVs and space-borne satellites can be used to record a wave field, making WCRS more flexible and scalable. These recording instruments have also become more accessible. Moreover, DIAs – the software required to analyse the wave recordings – can be used interchangeably on data of these different instruments. This means that WCRS becomes potentially attractive to a broad user community of coastal managers, the industry and the coast guard. However, DIAs still restrict broad usage of WCRS: while an important step has been taken in the open accessibility of DIAs, much is still to be gained in their handling and computational speed. This study aims to improve upon that, by building towards operational, self-adaptive and intelligent algorithms, which can provide maps of depth, near-surface currents and wave hydrodynamics on-the-fly. For this purpose, video data from a variety of instruments (fixed camera station, UAV, XBand-radar, satellite) on different spatial scales 𝑂(100 m2, 1 km2, 10 km2, 100 km2) and field-sites around the world (Netherlands, UK, USA, Australia, France) are analysed. Combining rapid processing capabilities with broad applicability, this study forms a stepping stone for a potentially broad WCRS user community. The analyses are presented going from land-based to air-borne to space-borne WCRS.
This is done in three stages from (1) applying an operational DIA on XBand radar data, to (2) applying an on-the-fly DIA on camera and UAV data, to finally (3) applying a DIA on temporally sparse satellite data.
First, a DIA named XMFit (X-Band Matlab Fitting) is introduced, which is robust, accurate and fast enough for operational use. This is achieved through an iterative procedure that selects the best result among a series of depth and near-surface current estimates. For this study, video data from XBand-radars are analysed. Focusing on depth estimates, XMFit is validated for two case studies in the Netherlands: (1) the “Sand Engine”, a beach mega-nourishment at a uniform open coast, and (2) the tidal inlet of the Dutch Wadden Sea island Ameland, characterizing a more complex coast. Considering both sites, the algorithm performance is characterized by a spatially averaged depth bias of −0.9 m at the Sand Engine (corresponding to an 18 h snapshot of the field site) and a time-varying bias of approximately −2–0 m at the Ameland Inlet (corresponding to a one-year time evolution with varying hydrodynamic conditions). When compared to in-situ depth surveys the accuracy is lower, but the time resolution higher. Dutch in-situ surveys typically occur annually, while depth estimates from the Ameland tidal inlet are produced every 50 min by an operational system using a navigational X-Band radar. This enables monitoring of the placement of a 5 Mm3 ebb-tidal delta nourishment – a pilot measure for coastal management. Volumetric changes in the nourishment area over the year 2018, occurring at 7 km distance from the radar, are estimated with an error of 7%. Depth errors statistically correlate with the direction and magnitude of simultaneous near-surface current estimates. Additional experiments on Sand Engine data demonstrate that depth errors may be significantly reduced using an alternative spectral approach and/or a Kalman filter.
Having demonstrated the potential of DIAs for operational application, the next step is to design an algorithm that can self-adapt to video from any field-site and can process it on-the-fly. To do so, a DIA is designed whose code architecture for the first time includes the Dynamic Mode Decomposition (DMD) to reduce the data complexity of wave-field video. The DMD is paired with loss functions to handle spectral noise, and with a novel spectral storage system and Kalman filter to achieve fast-converging measurements. The algorithm is showcased for videos from ARGUS stations and drones recorded at field-sites in the USA, UK, Netherlands, and Australia. The performance with respect to mapping bathymetry is validated using ground-truth data. It is demonstrated that merely 32 s of video footage is needed for a first mapping update with average depth errors of 0.9–2.6 m. These further reduce to 0.5–1.4 m as the videos continue and more mapping updates are returned. Simultaneously, coherent maps of wave direction and celerity are achieved, as well as maps of local near-surface currents. The algorithm is capable of mapping the coastal parameters on-the-fly and thereby offers analysis of video feeds, such as from drones or operational camera installations. Hence, the innovative application of analysis techniques like the DMD enables both accurate and unprecedentedly fast coastal reconnaissance.
With a skilled, intelligent DIA at hand, the question remains whether it can also be used on satellite imagery, as that would further broaden the application range. DIAs commonly analyse video from shore-based camera stations, UAVs or XBand-radars with durations of minutes and at framerates of 1–2 fps to find relevant wave frequencies. However, these requirements are typically not met by raw, temporally sparse satellite imagery. To overcome this problem, a preprocessing step is utilized. Here, a sequence of 12 images of Capbreton, France, collected over a period of ∼1.5 min at a framerate of 1/8 fps by the Pleiades satellite, is augmented to a pseudo-video with a framerate of 1 fps. For this purpose, a recently developed method is used, which considers spatial pathways of propagating waves for temporal video reconstruction. The resulting video is subsequently processed with the self-adaptive DIA. The combination of image augmentation with a frequency-based depth inversion method shows potential for broad application to temporally sparse satellite imagery and thereby aids in the effort towards broad usage of WCRS for mapping coastal bathymetry around the globe.
By improving DIAs and their application to different instruments, this study has helped to increase the technological readiness of WCRS and its potential to be adopted by end-users. It was shown that WCRS can be performed on wave field records of land-based, airborne and space-borne instruments and therewith on scales ranging from 𝑂(100 m2) (fixed camera) to 𝑂(100 km2) (X-band radar, satellite). The cost of WCRS is minor, as it can make use of existing navigational X-band radars, affordable UAVs and cameras, and accessible satellite data. X-band radars can operationally monitor complex coastal environments and recognize morphological trends, UAVs and cameras can be used for fast lean-and-mean mapping of coastal bathymetry, and by estimating depths from satellite imagery valuable data can be collected in otherwise data-poor environments. Yet, further steps should be taken in the accessibility, multifunctionality, quality, robustness and user-friendliness of WCRS. The key takeaway for effective WCRS monitoring is that future developments should strive towards integrated, self-adaptive software, which gives prompt visual response and requires little user expertise. These measures reduce the difficulty of learning WCRS, increase its compatibility with data from different instruments (XBand-radars, cameras, UAVs, satellites) and thereby enable relatively easy coastal measurements. As a consequence, WCRS becomes more adoptable by the coastal remote sensing community. With the exponential growth of data volumes worldwide, future data clouds may facilitate storage and offer future perspectives for online integration of data with numerical models and modern data science techniques like neural networks.
This may create new possibilities for understanding system dynamics and thereby further aid decision makers in coastal management, industry and the coast guard.","coastal remote sensing; mapping; depth inversion; wave field video; operational monitoring; on-the-fly processing; self-adaptive algorithms; X-band radar; camera; UAV; drone; satellite","en","doctoral thesis","","978-94-6384-377-5","","","","","","","","","Coastal Engineering","","",""
"uuid:7fe64066-bb46-41fa-836f-060a78f8177e","http://resolver.tudelft.nl/uuid:7fe64066-bb46-41fa-836f-060a78f8177e","High-dimensional scaling limits of piecewise deterministic sampling algorithms","Bierkens, G.N.J.C. (TU Delft Statistics); Kamatani, Kengo (Osaka University); Roberts, Gareth O. (University of Warwick)","","2022","Piecewise deterministic Markov processes are an important new tool in the design of Markov chain Monte Carlo algorithms. Two examples of fundamental importance are the bouncy particle sampler (BPS) and the zig–zag process (ZZ). In this paper scaling limits for both algorithms are determined. Here the dimensionality of the space tends towards infinity and the target distribution is the multivariate standard normal distribution. For several quantities of interest (angular momentum, first coordinate and negative log-density) the scaling limits show qualitatively very different and rich behaviour. Based on these scaling limits the performance of the two algorithms in high dimensions can be compared. Although for angular momentum both processes require only a computational effort of O(d) to obtain approximately independent samples, the computational effort for negative log-density and first coordinate differs: for these, BPS requires O(d²) computational effort whereas ZZ requires O(d). Finally, we provide a criterion for the choice of the refreshment rate of BPS.","exponential ergodicity; Gaussian process; Markov chain Monte Carlo; Piecewise deterministic Markov processes; weak convergence","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Statistics","","",""
"uuid:f6a6096d-9eb1-4a77-b376-34e01b817011","http://resolver.tudelft.nl/uuid:f6a6096d-9eb1-4a77-b376-34e01b817011","Statistical post processing of extreme weather forecasts","Velthoen, J.J. (TU Delft Statistics)","Jongbloed, G. (promotor); Cai, J. (copromotor); Delft University of Technology (degree granting institution)","2022","In this thesis we develop several statistical methods to estimate high conditional quantiles to use for statistical post-processing of weather forecasts. We propose methodologies that combine theory from extreme value statistics and machine learning algorithms in order to estimate high conditional quantiles in large covariate spaces. In applications of weather forecasting we show improved predictive skill for precipitation forecasts.","Extreme quantile regression; Statistical post-processing; Extreme value theory; Extreme conditional quantile; Variable selection; Random Forest; Gradient boosting","en","doctoral thesis","","9789083272726","","","","","","","","","Statistics","","",""
"uuid:a1908c73-2bc4-4947-944e-2bd3a177bfe6","http://resolver.tudelft.nl/uuid:a1908c73-2bc4-4947-944e-2bd3a177bfe6","Urn models and other approaches to risk and tails, with applications in risk management and climatology","Cheng, D. (TU Delft Applied Probability)","Redig, F.H.J. (promotor); Cirillo, P. (promotor); Delft University of Technology (degree granting institution)","2022","This dissertation collects three scientific contributions, already published in international peer-reviewed journals, plus some extra considerations and work in progress. First, we present a model based on reinforced urn processes, which conjugates to the right-censored recovery process, and empirically apply it to the time series of recovery rates. We perform a very thorough empirical study, including how different priors affect the posterior predictive distribution and how our model is updated with the empirical data during the global financial crisis, and we make predictions. Second, we apply a bivariate reinforced process derived from a Generalized Polya Urn scheme to model the linear dependence between the probability of default and the loss given default. Third, we offer a new perspective based on the stochastic Poisson equation to deal with spatio-temporal extremes. As will become clear, the leitmotiv of this thesis is the analysis of risk using different tools, from urn models to extreme value theory. In particular, we have focused on two risk applications: the modelling of credit risk in several of its forms, and the prediction of the joint tail behavior of extreme sea surface temperature (SST) anomalies for the Red Sea. Almost every financial contract is affected by credit risk, that is, the risk of changes in the creditworthiness of a counterparty. Financial economists, market participants, bank supervisors, and regulators have all paid close attention to credit risk measurement, pricing, and management.
The probability of default, the recovery rate, and their dependence are fundamental aspects of credit risk. Measuring credit risk accurately is pivotal for four reasons. First, for financial economists, credit risk measures are very important for pricing credit risk portfolios, credit derivatives, etc. The importance of credit risk in the pricing of financial contracts has been underlined by the global financial crisis. Second, during the credit risk management process of companies, an accurate credit risk measure can help the management team better determine their risk appetite. Third, the well-known Basel capital requirements are calculated using credit risk measures. Fourth, accurate estimation of credit risk can help a manager improve decisions. For example, in the recovery activities after default, more effort will be put on individuals with a high estimated LGD to reduce large losses. During my PhD studies I also took part in several conferences, among which the 11th international conference on Extreme Value Analysis (EVA 2019). While attending this conference, I decided to participate in one of the proposed challenges for young scholars, something that led to the writing of one of the contributions of this work, which also won the first prize in the competition.","Reinforced Urn Process; Credit Risk; Probability of Default; Loss Given Default; Extreme Value Theory; Stochastic Poisson Equation; Spatiotemporal Data","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:710d7600-a6c3-4ad2-ada0-214005e28cb2","http://resolver.tudelft.nl/uuid:710d7600-a6c3-4ad2-ada0-214005e28cb2","Unfolding the Early Fatigue Damage Accumulation for Cross-ply Laminates","Li, X. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Zarouchas, D. (copromotor); Delft University of Technology (degree granting institution)","2022","Fatigue damage of composite laminates has attracted considerable attention from the research community and industry, given that laminated structures inevitably suffer fatigue loading during their service life. It is rather complicated to understand and explain what governs the initiation, accumulation and interaction (synergy or competition) of different damage mechanisms. Intrinsic and extrinsic scatter sources are hard to eliminate during the fatigue testing of laminates, which produces significant dispersion of laboratory data and further hinders our understanding of the progressive accumulation process of fatigue damage. Consequently, most fatigue damage models for laminates are mathematically fitted to existing experimental data, rather than being related to the mechanisms of the damage accumulation process. Considering that the majority of stiffness degradation occurs during the early fatigue life, before the final failure of laminates, the objective of this thesis is to investigate the accumulation of matrix-dominant damage, with possible scatter phenomena taken into account. Carbon fiber/polymer laminates with cross-ply configurations were selected as the research target, as they have been increasingly used for aerospace structures due to their light weight. An experimental set-up involving multiple damage monitoring systems was developed to characterize and quantify in situ the initiation and accumulation of transverse cracks and delamination. To further investigate the scatter of crack evolution among specimens, a strength-based probabilistic model is developed.
Overall, this thesis provides a clear picture of the interactive scheme of early fatigue damage accumulation of CFRP cross-ply laminates, which further enhances our understanding of the development of physics-based fatigue damage models for FRP laminates.","Fatigue; Composites; Stochastic matrix cracking; Delamination; Damage interaction; Stiffness degradation; Poisson’s ratio; Acoustic emission; Digital image correlation; Image processing; In-situ monitoring; Probabilistic damage model","en","doctoral thesis","","978-94-6384-363-8","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:be761a38-a87a-4e87-9343-7f838ccf6c89","http://resolver.tudelft.nl/uuid:be761a38-a87a-4e87-9343-7f838ccf6c89","Modelling and Analysis of Atrial Epicardial Electrograms: An approach based on graph signal processing and confirmatory factor analysis","Sun, M. (TU Delft Signal Processing Systems)","van der Veen, A.J. (promotor); Hendriks, R.C. (promotor); de Groot, N.M.S. (promotor); Delft University of Technology (degree granting institution)","2022","Atrial fibrillation (AF) is a frequently encountered cardiac arrhythmia characterized by rapid and irregular atrial activity, which increases the risk of strokes, heart failure and other heart-related complications. The mechanisms of AF are complicated. Although various mechanisms have been proposed in previous research, the precise mechanisms of AF are not yet clear and the optimal therapy for AF patients is still under debate. A higher success rate of AF treatments requires a deeper understanding of the problem of AF and potentially a better screening of the patients.
In order to study AF, instead of using human body surface ECGs, we use the epicardial electrograms (EGMs) obtained directly from the epicardial sites of the human atria during open heart surgery. This data is measured using a high-resolution mapping array and exhibits irregular properties during AF. Although different studies have analyzed electrograms in time and frequency domain, there remain many open questions that require alternative and novel tools to investigate AF.
Experience in signal processing suggests that incorporating the spatial dimension into the time-frequency analysis of the multi-electrode electrograms may provide improved insights into the atrial activity. However, the electrophysiological models for describing spatial propagation are relatively complex and non-linear, such that conventional signal processing methods are less suitable for a joint space, time, and frequency domain analysis. It is also difficult to use very detailed electrophysiological models to extract tissue parameters related to AF from the high-dimensional data.
In this dissertation, we propose a radically different approach to study and analyze the EGMs from a higher abstraction level and from different perspectives, to gain more understanding of the characteristics of AF. We also aim to develop a simplified electrophysiological model that can capture the spatial structure of the data and propose an efficient method to estimate the tissue parameters, which are helpful for analyzing the electropathology of the tissue, e.g., cell activation time or conductivity.
In the first part of this study, we put forward a graph-time spectral analysis framework to analyze EGMs during normal heart rhythm and AF with a higher-level model. To capture the frequency content along both the time domain and the graph domain, we propose the joint graph and short-time Fourier transform, which allows us to evaluate the temporal and spatial variation of EGMs and capture the interaction between space and time. The spectral analysis of the EGMs helps us to recognize the impact of atrial fibrillation on the atrial activity and to identify the differences between the atrial activity and the ventricular activity. We find that the difference in graph smoothness between the atrial and ventricular activities enables us to better extract the atrial activity from the noisy measurements.
The second part of this study is to find a simplified but sufficiently accurate electrophysiological model for the high-dimensional EGMs and to make more efficient use of the data to detect the arrhythmogenic substrate that causes abnormalities in atrial tissue. In this dissertation, we develop the cross power spectral density matrix (CPSDM) model of the multi-electrode EGMs and make use of an effective method called confirmatory factor analysis (CFA) to jointly estimate the model parameters. The conductivity, the activation time, and the anisotropy ratio are useful parameters to determine abnormalities in cardiac tissue and are therefore the target parameters to be estimated. With the reasonable assumptions that the conductivity parameters and the anisotropy parameters are constant across different frequencies and heart beats, and that the activation time of cells is constant across different frequencies, we propose simultaneous CFA (SCFA) to jointly estimate these parameters using multiple frequencies and multiple heart beats. The identifiability conditions which need to be satisfied in the CFA problem are used to find the relationship between the desired resolution and the required amount of data. Evaluations on the simulated data and the clinical data demonstrate that the proposed method can localize the conduction blocks in the tissue and reconstruct the clinical EGMs well using the estimated parameters.","Atrial fibrillation; epicardial electrograms; spectral analysis; graph-time signal processing; electrophysiological model; cross-power spectral density matrix model; conductivity estimation; activation time estimation; anisotropy ratio estimation; confirmatory factor analysis","en","doctoral thesis","","978-94-6366-545-2","","","","","","","","","Signal Processing Systems","","",""
"uuid:f5ab45a2-376b-48e7-b83d-55f755c12e3b","http://resolver.tudelft.nl/uuid:f5ab45a2-376b-48e7-b83d-55f755c12e3b","Direction of arrival estimation and self-calibration techniques using an array of acoustic vector sensors: Theory, algorithms and applications","Nambur Ramamohan, K. (TU Delft Signal Processing Systems)","Leus, G.J.T. (promotor); Delft University of Technology (degree granting institution)","2022","Microphones are the most popular devices used to convert sound into electrical signals. However, with the advent of sensor technology, transducers capable of measuring vector quantities are opening up many new possibilities. One such device is an acoustic vector sensor (AVS), which measures both acoustic pressure and particle velocity, and has shown promising results with distinct advantages. In this work, we explore the characteristics of AVS arrays and their variations in comparison to the conventional microphone arrays for the purpose of direction-of-arrival estimation of far-field sound sources. Furthermore, we also look into one of the practical aspects of calibrating the AVS arrays and propose novel techniques to address this issue.","acoustic vector sensor; direction-of-arrival; self-calibration; array signal processing; spatial under-sampling; Cramér-Rao lower bound","en","doctoral thesis","","978-94-6366-556-8","","","","","","","","","Signal Processing Systems","","",""
"uuid:c44f8490-da62-4f7c-9945-3cdb6fe0a7a4","http://resolver.tudelft.nl/uuid:c44f8490-da62-4f7c-9945-3cdb6fe0a7a4","A bird's-eye view on infrasound: High-resolution methods to unravel the ambient microbarom wavefield","den Ouden, O.F.C. (TU Delft Applied Geophysics and Petrophysics)","Evers, L.G. (promotor); Smets, P.S.M. (copromotor); Delft University of Technology (degree granting institution)","2022","","infrasound; microbaroms; sensor technology; soundscapes; array processing","en","doctoral thesis","","978-94-6366-487-5","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:171ba94a-e8f4-4969-b6ed-912d4f334968","http://resolver.tudelft.nl/uuid:171ba94a-e8f4-4969-b6ed-912d4f334968","Reliable numerical algorithms for the Non-linear Fourier Transform of the KdV equation","Prins, Peter J. (TU Delft Team Sander Wahls)","Wahls, S. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2022","Research question
The topic of this dissertation is the numerical computation of the forward and inverse Non-linear Fourier Transform (NFT) for the Korteweg–de Vries equation (KdV), for sampled signals that decay sufficiently fast on both sides. With NFTs certain non-linear Partial Differential Equations (PDEs) can be solved in a way that is analogous to solving linear Ordinary Differential Equations (ODEs) and PDEs by means of the ordinary Fourier transform. Similarly to the linear Fourier transform, NFTs can be used to analyse, synthesise, filter and predict signals. Existing numerical NFT algorithms suffer from limited accuracy, long computation times, or both, which limits the usability of the KdV-NFT for engineering problems. In this dissertation we develop new algorithms that achieve a higher accuracy or require a shorter computation time.
Design methods
We implemented existing numerical algorithms in MathWorks MATLAB in floating-point arithmetic to analyse their behaviour. Thereafter, we designed new algorithms that avoid the undesirable behaviour of the existing algorithms. We demonstrated the improvements by means of benchmark tests. Furthermore, we implemented some of the new algorithms in the programming language C in the Fast Non-linear Fourier Transform (FNFT) software library.
Results
We have developed algorithms to compute the continuous KdV-NFT spectrum and the eigenvalues and norming constants of the discrete KdV-NFT spectrum. Furthermore we developed an algorithm to compute the contribution of the discrete spectrum to the inverse KdV-NFT. The continuous KdV-NFT spectrum can now be computed with a fast algorithm at a comparable error tolerance to the Non-linear Schrödinger Equation (NSE)-NFT. That means that the computational complexity has been reduced from O(D^2) to O(D(log(D))^2), where D is the number of samples, without a significant deterioration of the accuracy. The eigenvalues of the discrete KdV-NFT spectrum can now be computed reliably and more efficiently than before. The norming constants can now be computed in all known cases without the anomalous errors that were observed for older algorithms. That means an improvement of the accuracy by several orders of magnitude. The contribution of the inverse KdV-NFT can now be computed for discrete spectra with three to seven times as many eigenvalues in comparison to previously available algorithms.
Conclusions and applications
The KdV can be used as a model for nearly linear wave phenomena that propagate in one direction. These are found in a plethora of physical applications. The algorithms that we presented in this dissertation can be used for the analysis, synthesis, filtering and prediction of sampled data from such systems. Their higher accuracy and/or shorter computation time thus brings the KdV-NFT a step closer to the engineering practice.","signal processing algorithms; non-linear Fourier transform (NFT); Korteweg–de Vries (KdV) equation; Schrödinger equation; water wave; soliton; norming constant; exponential splittings; dressing method; Darboux transform; Crum transform","en","doctoral thesis","","978-94-6384-320-1","","","","","","","","","Team Sander Wahls","","",""
"uuid:b0724d54-822e-433a-8104-f7016374575d","http://resolver.tudelft.nl/uuid:b0724d54-822e-433a-8104-f7016374575d","Increasing FAIRness by sustainable modelling of interactions of parties with land administration systems","Vranić, Saša; Matijević, Hrvoje; Roić, Miodrag; Cetl, Vlado","","2022","In recent years, mobile and web applications have been used extensively and the availability of data, including geospatial data, has increased dramatically. With the outbreak of Covid-19 this was emphasised even more. The emphasis has shifted from large IT systems towards modular service-oriented systems, which allows easier upgrading and adding of specific components. OGC standards have been available since the early 2000s with the aim of providing a common base for the dissemination of geospatial data. These standards mostly depend on the XML format to provide data and metadata. In the current technological stack, XML has become unsuitable: too complex to handle for various clients (mobile/web applications, various devices). This has been addressed by OGC as well, and in 2018 activities on the new set of OGC API standards started. This set of standards follows the overall aim of OGC: to make geospatial data FAIR (Findable, Accessible, Interoperable and Reusable). Land Administration System (LAS) data also have a geospatial component and have extensively used OGC standards for dissemination. The Land Administration Domain Model (LADM) provides a common conceptual model for modelling LAS. LADM is being revised, and the second edition has a wider scope, adding support for modelling marine spaces, land valuation and spatial planning. It also brings changes to existing classes related to land registration to address issues recognized by a wide range of scientists and practitioners involved in LADM. However, LADM is missing support for modelling the various interactions that are available in the current technological environment.
Nowadays LAS data are usually published via web applications (geoportals) where users can browse LAS data in an interactive manner. LADM supports modelling only formal procedures such as registering a building, splitting a parcel, or retrieving a certificate (map, ownership). This paper focuses on standard-based implementations of interactions of people with LAS data and explores options to make land administration processes and data FAIR. We first define a hierarchical organization of interactions of parties with LAS. Then, we integrate the concept of interactions into the current version of LADM Edition II and show how existing formal processes and the LADM classes LA_Source and VersionedObject are integrated with interactions. Finally, we present how these interactions fit into the concepts defined by the OGC API standards. Also, to prove the feasibility of the developed concept, we give an example implementation with the open-source library pygeoapi, an OGC API Reference Implementation.","Interaction; LADM; FAIR; OGC API; Process","en","conference paper","","","","","","","","","","","","","",""
"uuid:06ff0b5e-d5da-4149-a90f-62064c29f238","http://resolver.tudelft.nl/uuid:06ff0b5e-d5da-4149-a90f-62064c29f238","Molten Metal Oscillatory Behaviour in Advanced Fusion-based Manufacturing Processes","Ebrahimi, Amin (TU Delft Team Marcel Hermans)","Richardson, I.M. (promotor); Kleijn, C.R. (promotor); Delft University of Technology (degree granting institution)","2022","The growing demand for manufactured products with complex geometries requiring advanced fusion-based manufacturing techniques emphasises the importance of process development and optimisation to reduce the risk of adverse outcomes, which is currently impeded by traditional approaches (trial-and-error experiments). Development, optimisation and qualification of such procedures are often expensive and time-consuming, particularly when new materials or new material combinations are involved. Process stability is intrinsically linked to the stability of the molten metal melt-pool, which ideally should solidify in a smooth and continuous manner to produce a consistent product, free of undesirable geometric and metallurgical defects. The influence of material properties and process conditions on melt-pool stability is generally difficult to derive from experimental observations; hence process optimisation is often reliant on a trial-and-error approach, mitigated to a large extent by a considerable body of industrial experience.
The challenge addressed in this research is to develop a simulation-based approach to assess the stability of oscillating melt-pools in fusion welding and additive manufacturing, to minimise the number of trial-and-error experiments required for process development and optimisation, which ultimately will lead to shortening the time between design and production. The computational model developed in the present work has a generic construction with specific process influences addressed through appropriate boundary conditions, avoiding the necessity to integrate melt pool and detailed process descriptions in a single simulation. The model is therefore capable of representing a wide range of welding and additive manufacturing technologies through selection of appropriate material properties and boundary conditions. The robustness of the present computational model in predicting the melt-pool behaviour is demonstrated by comparing the numerical predictions with experimental, analytical and numerical data.
Focusing on numerical simulations of solidification and melting using the enthalpy-porosity method, the influence of the permeability coefficient (also known as the mushy-zone constant), which is employed to dampen fluid velocities in the mushy zone and suppress them in solid regions, on the numerical predictions is systematically analysed for both isothermal and non-isothermal phase-change problems. For isothermal phase-change problems, reducing the cell size diminishes the influence of the mushy-zone constant on the results, and the solution becomes independent of the mushy-zone constant for fine enough meshes. Numerical predictions of non-isothermal phase-change problems are inherently dependent on the mushy-zone constant. A method is proposed, based on a Péclet number, to predict and evaluate the influence of the permeability coefficient on numerical predictions of solidification and melting problems.
In many numerical studies in the literature, the transport coefficients of the material, specifically thermal conductivity and viscosity, are artificially increased by a so-called `enhancement factor' to achieve agreement between experiments and numerically predicted melt-pool sizes and solidification rates. However, the use of an enhancement factor has little physical meaning, does not represent the physics of complex transport phenomena and can significantly affect the numerical predictions. The effects of using enhancement factors on the numerical predictions of melt-pool behaviour in fusion welding and additive manufacturing are studied in detail. Moreover, the effects of employing temperature-dependent material properties on the numerical predictions are discussed in the present thesis.
Melt pools in fusion welding and additive manufacturing exhibit highly non-linear responses to variations of process parameters and are very sensitive to imposed boundary conditions. Temporal and spatial variations in the energy-flux distribution, which are often neglected in numerical simulations, are taken into account in the present work. It is shown how deformations of the melt-pool surface, due to fluid motion as well as changes in the system orientation, affect the numerical predictions of thermal and fluid flow fields. The effects of joint shape design on melt-pool behaviour during fusion welding are also studied in the present work.
Changes in power-density and force distributions affect the thermal and fluid flow fields on the melt-pool surface, which in turn can influence the pool shape. Oscillations relate strongly to the shape and size of the melt pool and the surface tension distribution on the molten material surface. Using the simulation-based approach developed in the present work, the frequency and amplitude of melt-pool oscillations and changes in the oscillation modes are predicted, which are not accessible using published analytical models and are generally difficult to measure experimentally. Additionally, using the proposed simulation-based approach, the need for triggering of the melt-pool oscillations is obviated, since even small surface displacements can be resolved, which are not measurable by the current measurement devices employed in experiments.
The dynamic features of the oscillation signals cannot easily be derived employing conventional Fourier transform (FT) analysis since the oscillation signals are assumed to be stationary (i.e. the behaviour of the system is linear and time-invariant), which is often not the case in fusion welding and additive manufacturing. The continuous wavelet transform (CWT) has been employed in the present work to overcome the shortcomings of the conventional fast Fourier transform (FFT) analysis in characterising the non-stationary features of the surface oscillation signals received from the melt pool. Employing the continuous wavelet transform, the time-resolved melt-pool surface oscillation signals obtained from the numerical simulations can be decomposed into time and frequency spaces simultaneously.
The simulation-based approach developed in the present work addresses some of the significant challenges involved in assessing melt-pool stability for process development and optimisation. The numerical predictions of the present computational model enhance the current understanding of the process behaviour, which is often very challenging to achieve from experiments alone. Moreover, the present simulation-based approach can be employed to explore the design space and reduce the costs associated with process development and optimisation.","Materials processing; Fusion welding and additive manufacturing; Process design and optimisation; Melt pool behaviour; Computational modelling","en","doctoral thesis","","9789464237412","","","","","","","","","Team Marcel Hermans","","",""
"uuid:2bd86c48-8b81-4a66-838f-c85bdb7db334","http://resolver.tudelft.nl/uuid:2bd86c48-8b81-4a66-838f-c85bdb7db334","Efficient Methods for Spectral Geometry Processing","Nasikun, A. (TU Delft Computer Graphics and Visualisation)","Eisemann, E. (promotor); Hildebrandt, K.A. (copromotor); Delft University of Technology (degree granting institution)","2022","Research in geometry processing concerns the design of algorithms and mathematical models for the analysis and manipulation of geometric data. Examples of its applications are shape projection (e.g. smoothing and filtering), shape correspondence (e.g. functional maps), shape descriptors (e.g. heat and wave kernel signatures), segmentation, and surface parameterization. A set of tools that have proven useful for solving such tasks are spectral methods. In general, spectral methods solve geometry processing problems by exploiting the spectra and the eigenfunctions of the Laplacian operator defined on a surface mesh. This allows us to extend the notion of Fourier analysis, a theoretically sound and well-researched concept, from signal and image processing to surface processing. In practice, the decomposition of the Laplacian operator into a diagonal matrix of eigenvalues and a rectangular matrix of eigenvectors enables efficient treatment of a broad range of geometry processing problems.
A main obstacle in spectral geometry processing is the high computational cost of the eigendecomposition of the Laplacian operator, which must be performed before the spectra and the eigenfunctions can be used in applications. Since analytical solutions are not known, one needs to opt for a numerical method to solve the eigenvalue problem. This computation is numerically expensive, especially for a complex mesh. Another challenge comes from the storage requirement. Because the eigenfunctions of the Laplace--Beltrami operator have global support, a dense matrix is needed to represent the eigenvectors. Therefore, the memory requirement for storing the eigenbasis can be high, particularly when a large number of eigenfunctions need to be stored. These challenges hinder the use of spectral methods for geometry processing applications.
In this thesis, we introduce new methods addressing the aforementioned challenges. In Chapter 2, we propose a fast algorithm that allows for approximating the smallest eigenvalues and the corresponding eigenvectors of the Laplace--Beltrami operator in just a fraction of the time needed to solve the original eigenvalue problem. We construct subspaces of the space of all functions that include low-frequency functions and restrict the solution of the eigenproblem to the subspace. This enables the fast approximation of the eigenproblem, independent of the size of the original problem. Our novel scheme also enables significantly more efficient storage of the approximated eigenfunctions. We show that the approximated spectra are close to the reference spectra and that the fast approximation method benefits geometry processing applications, such as shape classification, geodesic distance computation, shape projection (e.g. filtering), and vibration modes of deformable objects.
In Chapter 3, we consider localized eigenfields of the Hodge--Laplacian, which serve as a sparse basis for the efficient design and processing of tangential fields. The basis spans subspaces of the spaces of tangential vector, $n$-vector, and tensor fields on a surface mesh. Restricting the design and processing of tangential fields to the subspace allows us to decouple the degrees of freedom we use for design and processing tasks from the complexity of the mesh representation. The construction is scalable, so we can efficiently compute and store subspaces for large meshes. We evaluate the performance of the novel method on various modeling and processing tasks in vector fields (fur design), $n$-vector fields ($n$-field design and hatching/line-art design), and tensor fields (curvature field smoothing) and show that the computation time decreases by up to two orders of magnitude compared to that of the original problem.
Chapter 4 introduces a novel multigrid method for numerically solving Laplace--Beltrami eigenproblems on a surface mesh. Our new technique, the Hierarchical Subspace Iteration Method (HSIM), works on a hierarchy of nested vector spaces, in which the solution from the coarser level is used as the initial solution on the finer level. We construct the coarsest level such that the eigenproblems can be solved efficiently using a dense eigensolver. On every level, the prolongation operator maps the solution from the coarser to the finer level. The result can then be used as an initialization for subspace iterations to approximate the eigenpairs. This approach significantly reduces the number of iterations at the finest level, compared to the non-hierarchical subspace iteration method. We show that HSIM outperforms the Locally Optimal Block Preconditioned Conjugate Gradient method and state-of-the-art Lanczos-based eigensolvers, such as Matlab's eigs, Manifold Harmonics, and SpectrA.
In summary, each of the chapters in this thesis proposes efficient algorithms for computing the eigendecompositions of the Laplace--Beltrami and Hodge--Laplace operators, mainly using model order reduction and multigrid approaches. These methods reduce computational costs (Chapters 2-4) and storage requirements (Chapters 2-3) for the spectral processing of scalar functions and tangential fields on surface meshes.","geometry processing; spectral methods; model order reduction; Multigrid; Laplace--Beltrami operator; vector fields","en","doctoral thesis","","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:9dd9701b-343e-4f25-8d49-02652e839e32","http://resolver.tudelft.nl/uuid:9dd9701b-343e-4f25-8d49-02652e839e32","Technology platform for advanced neurostimulation implants: The “chip-in-tip” DBS probe","Kluba, M.M. (TU Delft Electronic Components, Technology and Materials)","Dekker, R. (promotor); Delft University of Technology (degree granting institution)","2022","The progress in the field of neurostimulation is impressive, both from a technical as well as from a therapeutic point of view. Nowadays, the electrical stimulation of the nervous system can be used to induce or suppress muscle responses. Additionally, it can also influence hearing, vision, immune system response, pain perception, and even mental state. The number of medical conditions that can be treated using existing or completely new neurostimulation devices is continuously growing. Moreover, well-targeted electrical neuromodulation can help reduce the whole-body side effects, typical for traditional medication therapies. However, the potential of neurostimulation therapy is limited by the relatively slow development of the accompanying technologies. Most commercial neurostimulation implants still consist of a pulse generator encapsulated in a bulky titanium case and lengthy extension cords. Moreover, in some cases, such as deep brain stimulation (DBS), the resolution of the stimulation is also an issue that can cause severe side effects. In this thesis work, a technology platform for the manufacturing and packaging of advanced neurostimulation implants has been developed to enable further bioelectronics miniaturization and improve the stimulation resolution. 
These goals have been achieved in close collaboration with the InForMed project partners involved in finalizing the joint design, preparing the inter-facility fabrication process, and supplying off-the-shelf technology modules…","neurostimulation; directional deep brain stimulator; miniaturization; high-level integration; trench capacitor; high-definition flex-to-rigid; sealable trenches; cavity-BOX; biocompatible flip-chip; flexible interconnects; soft encapsulation; parylene; platinum; ceramics; parylene processing in cleanroom","en","doctoral thesis","","978-94-6384-296-9","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:dfaa4423-5e22-45cd-93e8-e469c7c71255","http://resolver.tudelft.nl/uuid:dfaa4423-5e22-45cd-93e8-e469c7c71255","Migration-aware Network Services with Edge Computing","Mukhopadhyay, Atri (Trinity College Dublin); Iosifidis, G. (TU Delft Embedded Systems); Ruffini, Marco (Trinity College Dublin)","","2022","The development of Multi-access edge computing (MEC) has resulted from the requirement for supporting next generation mobile services, which need high capacity, high reliability and low latency. The key issue in such MEC architectures is to decide which edge nodes will be employed for serving the needs of the different end users. Here, we take a fresh look into this problem by focusing on the minimization of migration events rather than focusing on maximizing usage of resources. This is important because service migrations can create significant service downtime to applications that need low latency and high reliability, in addition to increasing traffic congestion in the underlying network. This paper introduces a priority induced service migration minimization (PrISMM) algorithm, which aims at minimizing service migration for both high and low priority services, through the use of Markov decision process, learning automata and combinatorial optimization. We carry out extensive simulations and produce results showing its effectiveness in reducing the mean service downtime of lower priority services and the mean admission time of the higher priority services.","Costs; generalized assignment problem; learning automata; markov decision process; Markov processes; Minimization; multi-access edge computing; Passive optical networks; Resource management; Servers; service migration.; Task analysis","en","journal article","","","","","","","","","","","Embedded Systems","","",""
"uuid:ebf20e65-b503-4eb7-8116-ee59d6495bd6","http://resolver.tudelft.nl/uuid:ebf20e65-b503-4eb7-8116-ee59d6495bd6","Advances in Magnetics Roadmap on Spin-Wave Computing","Blanter, Y.M. (TU Delft QN/Quantum Nanoscience; TU Delft QN/Blanter Group); Carmiggelt, J.J. (TU Delft QN/Quantum Nanoscience; TU Delft QN/vanderSarlab); Cotofana, S.D. (TU Delft Quantum & Computer Engineering; TU Delft Computer Engineering); Hamdioui, S. (TU Delft Quantum & Computer Engineering); Nikitin, A. A. (Saint Petersburg Electrotechnical University LETI); Reimann, T. (Innovent e.V.); Sharma, S. (Max Planck Institute for the Science of Light); van der Sar, T. (TU Delft QN/Quantum Nanoscience; TU Delft QN/vanderSarlab); Zhang, X. (Argonne National Laboratory)","","2022","Magnonics addresses the physical properties of spin waves and utilizes them for data processing. Scalability down to atomic dimensions, operation in the GHz-to-THz frequency range, utilization of nonlinear and nonreciprocal phenomena, and compatibility with CMOS are just a few of many advantages offered by magnons. Although magnonics is still primarily positioned in the academic domain, the scientific and technological challenges of the field are being extensively investigated, and many proof-of-concept prototypes have already been realized in laboratories. This roadmap is a product of the collective work of many authors, which covers versatile spin-wave computing approaches, conceptual building blocks, and underlying physical phenomena. In particular, the roadmap discusses the computation operations with the Boolean digital data, unconventional approaches, such as neuromorphic computing, and the progress toward magnon-based quantum computing. This article is organized as a collection of sub-sections grouped into seven large thematic sections. 
Each sub-section is prepared by one author or a group of authors and concludes with a brief description of current challenges and the outlook for further development of each research direction.","computing; data processing; Logic gates; Magnetic domains; magnon; magnonics; Nanoscale devices; Physics; Quantum computing; Spin wave; Three-dimensional displays","en","journal article","","","","","","","","","","QN/Quantum Nanoscience","QN/Blanter Group","","",""
"uuid:93fdc1a2-740b-4908-8006-be3f1ebbacc1","http://resolver.tudelft.nl/uuid:93fdc1a2-740b-4908-8006-be3f1ebbacc1","Robust Optimal Control for Demand Side Management of Multi-Carrier Microgrids","Carli, Raffaele (University of Bari); Cavone, Graziana (University of Bari); Pippia, T.M. (TU Delft Team Tamas Keviczky); De Schutter, B.H.K. (TU Delft Team Bart De Schutter); Dotoli, Mariagrazia (University of Bari)","","2022","This paper focuses on the control of microgrids where both gas and electricity are provided to the final customer, i.e., multi-carrier microgrids. Hence, these microgrids include thermal and electrical loads, renewable energy sources, energy storage systems, heat pumps, and combined heat and power units. The parameters characterizing the multi-carrier microgrid are subject to several disturbances, such as fluctuations in the provision of renewable energy, variability in the electrical and thermal demand, and uncertainties in the electricity and gas pricing. With the aim of accounting for the data uncertainties in the microgrid, we propose a Robust Model Predictive Control (RMPC) approach whose goal is to minimize the total economical cost, while satisfying comfort and energy requests of the final users. In the related literature various RMPC approaches have been proposed, focusing either on electrical or on thermal microgrids. Only a few contributions have addressed the robust control of multi-carrier microgrids. Consequently, we propose an innovative RMPC algorithm that employs on an uncertainty set-based method and that can provide better performance compared with deterministic model predictive controllers applied to multi-carrier microgrids. With the aim of mitigating the conservativeness of the approach, we define suitable robustness factors and we investigate the effects of such factors on the robustness of the solution against variations of the uncertain parameters. 
We show the effectiveness of the proposed RMPC approach by applying it to a realistic residential multi-carrier microgrid and comparing the obtained results with those of a baseline robust method.","demand side management (DSM); Energy and environment-aware automation; Heat pumps; Microgrids; multi-carrier microgrid; Renewable energy sources; Resistance heating; robust model predictive control; robust optimization; Robustness; set-based uncertainty; Stochastic processes; Uncertainty","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Team Bart De Schutter","","",""
"uuid:7d9d451c-9d31-4154-96da-87ed09e688df","http://resolver.tudelft.nl/uuid:7d9d451c-9d31-4154-96da-87ed09e688df","Critical appraisal of technologies to assess electrical activity during atrial fibrillation: a position paper from the European Heart Rhythm Association and European Society of Cardiology Working Group on eCardiology in collaboration with the Heart Rhythm Society, Asia Pacific Heart Rhythm Society, Latin American Heart Rhythm Society and Computing in Cardiology","de Groot, N.M.S. (TU Delft Signal Processing Systems; TU Delft Biomechanical Engineering; Erasmus MC); Shah, Dipen (University Hospital of Geneva); Boyle, Patrick M. (University of Washington); Anter, Elad (Cleveland Clinic Foundation); Clifford, Gari D. (Emory University); Deisenhofer, Isabel (German Heart Center Munich and Technical University of Munich); van Dessel, Pascal (Medisch Spectrum Twente); Dilaveris, Polychronis (National and Capodistrian University of Athens); van der Veen, A.J. (TU Delft Signal Processing Systems; Circuits and Systems (CAS) Group)","","2022","We aim to provide a critical appraisal of basic concepts underlying signal recording and processing technologies applied for (i) atrial fibrillation (AF) mapping to unravel AF mechanisms and/or identifying target sites for AF therapy and (ii) AF detection, to optimize usage of technologies, stimulate research aimed at closing knowledge gaps, and developing ideal AF recording and processing technologies. Recording and processing techniques for assessment of electrical activity during AF essential for diagnosis and guiding ablative therapy including body surface electrocardiograms (ECG) and endo- or epicardial electrograms (EGM) are evaluated. 
Discussion of (i) differences in uni-, bi-, and multi-polar (omnipolar/Laplacian) recording modes, (ii) impact of recording technologies on EGM morphology, (iii) global or local mapping using various types of EGM involving signal processing techniques including isochronal-, voltage-, fractionation-, dipole density-, and rotor mapping, enabling derivation of parameters like atrial rate, entropy, conduction velocity/direction, (iv) value of epicardial and optical mapping, (v) AF detection by cardiac implantable electronic devices containing various detection algorithms applicable to stored EGMs, and (vi) contribution of machine learning (ML) to further improvement of signal processing technologies. Recording and processing of EGM (or ECG) are the cornerstones of (body surface) mapping of AF. Currently available AF recording and processing technologies are mainly restricted to specific applications or have technological limitations. Improvements in AF mapping by obtaining the highest fidelity source signals (e.g. catheter-electrode combinations) for signal processing (e.g. filtering, digitization, and noise elimination) are of utmost importance. Novel acquisition instruments (multi-polar catheters combined with improved physical modelling and ML techniques) will enable enhanced and automated interpretation of EGM recordings in the near future.","Atrial fibrillation; Cardiac implantable electronic devices; EHRA position paper; Machine learning; Mapping; Signal processing; Signal recording","en","journal article","","","","","","","","","","Biomechanical Engineering","Signal Processing Systems","","",""
"uuid:661d44d7-1ff5-4eed-ba47-62f6c128c4a1","http://resolver.tudelft.nl/uuid:661d44d7-1ff5-4eed-ba47-62f6c128c4a1","Three Symmetries for Data-Driven Pedestrian Inertial Navigation","Wahlstrom, Johan (University of Exeter); Kok, M. (TU Delft Team Manon Kok)","","2022","The last years have seen a growing body of literature on data-driven pedestrian inertial navigation. However, despite this, it is still unclear how to efficiently combine classical models and other a priori information with existing machine learning frameworks. In this paper, we first categorize existing approaches to data-driven pedestrian inertial navigation, including approaches where a machine learning algorithm is embedded into an overarching classical framework and purely data-driven frameworks. We then propose an estimation framework where navigation estimates obtained by classical means are fed to a machine learning algorithm which is trained to correct and improve the estimates. Further, we describe three symmetries that can be used to constrain the proposed estimation framework and thereby improve its performance. These are 1) the rotational symmetry of pedestrian dynamics, 2) the rotational symmetry of the sensors, and 3) the temporal symmetry of pedestrian dynamics. To demonstrate the usefulness of the proposed framework, we use data from foot-mounted inertial sensors utilizing zero-velocity updates under mixed walking and running. Machine learning corrections are implemented using both neural networks and Gaussian processes.","Estimation; Gaussian processes; Inertial navigation; Kinematics; Machine learning; Machine learning algorithms; Mathematical models; Measurement uncertainty; neural networks; pedestrian navigation","en","journal article","","","","","","","","","","","Team Manon Kok","","",""
"uuid:229954c6-0852-4fb3-a21f-dc922d9bc5b1","http://resolver.tudelft.nl/uuid:229954c6-0852-4fb3-a21f-dc922d9bc5b1","Silicon in hot metal from a blast furnace, the role of FeO","Hage, J. L.T. (Tata Steel); van der Stel, J. (Tata Steel); Yang, Y. (TU Delft Team Yongxiang Yang)","","2022","Silicon [Si] in hot metal is an impurity acting in the steel shop as an energy source that is released by oxidation during oxygen blowing in the converter. Preferred silicon concentration in hot steel is typically 0.4 wt-% (± 0.1 wt-%), helping predictable and low-cost processing. In practice, the Si concentration is difficult to control and may occasionally reach levels higher than 1 wt-%. The authors studied data from observations, samples and autopsies of a chilled blast furnace, a core drill and a furnace in operation. In addition, production data from a melting reduction pilot plant (HIsarna) and FactSage® calculations were used. The amount of FeO in the raceway, the area in which hot gas and powdered coal (PCI) are introduced in the blast furnace, in relation to [Si] in hot metal, is observed. The goal of this paper is to contribute to a better understanding about the mechanism of the dissolution of silicon in hot metal.","Blast furnace; FeO; Hot metal quality; Process control; Raceway; Si-control; Silicon; SiO","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Team Yongxiang Yang","","",""
"uuid:66a72d1a-2ac5-45ff-8cef-69ee6efab42b","http://resolver.tudelft.nl/uuid:66a72d1a-2ac5-45ff-8cef-69ee6efab42b","Computational Array Signal Processing via Modulo Non-Linearities","Fernandez-Menduina, Samuel (Imperial College London); Krahmer, Felix (Technische Universität München); Leus, G.J.T. (TU Delft Signal Processing Systems); Bhandari, Ayush (Imperial College London)","","2022","Conventional literature on array signal processing (ASP) is based on the ""capture first, process"" later philosophy and to this end, signal processing algorithms are typically decoupled from the hardware. This poses fundamental limitations because if the sensors result in information loss, the algorithms may no longer be able to achieve their guaranteed performance. In this paper, our goal is to overcome the barrier of information loss via sensor saturation and clipping. This is a significant problem in application areas including physiological monitoring and extra-terrestrial exploration where the amplitudes may be unknown or larger than the dynamic range of the sensor. To overcome this fundamental bottleneck, we propose ""computational arrays"" which are based on a co-design approach so that a collaboration between the sensor array hardware and algorithms can be harnessed. Our work is inspired by the recently introduced unlimited sensing framework. In this context, our computational arrays encode the high-dynamic-range information by folding the signal amplitudes, thus introducing a new form of information loss in terms of the modulo measurements. On the decoding front, we develop mathematically guaranteed recovery algorithms for spatio-temporal array signal processing tasks that include DoA estimation, beamforming and signal reconstruction. 
Numerical examples corroborate the applicability of our approach and pave a path for the development of novel computational arrays for ASP.","Array signal processing; direction of arrival (DOA) estimation; multi-channel sampling; Non-linear sensing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-06-14","","","Signal Processing Systems","","",""
"uuid:4c554dce-7ba9-4e6b-a02d-48a70f20c224","http://resolver.tudelft.nl/uuid:4c554dce-7ba9-4e6b-a02d-48a70f20c224","Photoprocessing of H2S on dust grains Building S chains in translucent clouds and comets: Building S chains in translucent clouds and comets","Cazaux, S.M. (TU Delft Astrodynamics & Space Missions; Universiteit Leiden); Carrascosa, H. (Centro de Astrobiología (INTA-CSIC)); Muñoz Caro, G. M. (Centro de Astrobiología (INTA-CSIC)); Caselli, P. (Max Planck Institute for Extraterrestrial Physics Garching); Fuente, A. (Observatorio Astronómico Nacional (OAN)); Navarro-Almaida, D. (Observatorio Astronómico Nacional (OAN)); Riviére-Marichalar, P. (Observatorio Astronómico Nacional (OAN))","","2022","Context. Sulfur is a biogenic element used as a tracer of the evolution of interstellar clouds to stellar systems. However, most of the expected sulfur in molecular clouds remains undetected. Sulfur disappears from the gas phase in two steps. The first depletion occurs during the translucent phase, reducing the gas-phase sulfur by 7-40 times, while the following freeze-out step occurs in molecular clouds, reducing it by another order of magnitude. This long-standing question awaits an explanation. Aims. The aim of this study is to understand under what form the missing sulfur is hiding in molecular clouds. The possibility that sulfur is depleted onto dust grains is considered. Methods. Experimental simulations mimicking HS ice UV photoprocessing in molecular clouds were conducted at 8 K under ultra-high vacuum. The ice was subsequently warmed up to room temperature. The ice was monitored using infrared spectroscopy, and the desorbing molecules were measured by quadrupole mass spectrometry in the gas phase. Theoretical Monte Carlo simulations were performed for interpretation of the experimental results and extrapolation to the astrophysical and planetary conditions. Results. HS formation was observed during irradiation at 8 K. 
Molecules H2Sx with x > 2 were also identified and found to desorb during warm-up, along with S2 to S4 species. Larger Sx molecules up to S8 are refractory at room temperature and remained on the substrate, forming a residue. Monte Carlo simulations were able to reproduce the molecules desorbing during warm-up and found that the residues are chains of sulfur consisting of 6-7 atoms. Conclusions. Based on the interpretation of the experimental results using our theoretical model, it is proposed that S+ in translucent clouds contributes notably to S depletion in denser regions by forming long S chains on dust grains in a few times 10^4 yr. We suggest that the S2 to S4 molecules observed in comets are not produced by fragmentation of these large chains. Instead, they probably come either from UV photoprocessing of H2S-bearing ice produced in molecular clouds or from short S chains formed during the translucent cloud phase.","Astrochemistry; Comets: general; ISM: abundances; ISM: clouds; ISM: molecules; Molecular processes","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Astrodynamics & Space Missions","","",""
"uuid:da7b481b-1fe1-4217-a3e4-d8e9c2dd6d56","http://resolver.tudelft.nl/uuid:da7b481b-1fe1-4217-a3e4-d8e9c2dd6d56","Machine learning based optimization for interval uncertainty propagation","Cicirello, A. (TU Delft Mechanics and Physics of Structures); Giunta, F. (TU Delft Mechanics and Physics of Structures)","","2022","Two non-intrusive uncertainty propagation approaches are proposed for the performance analysis of engineering systems described by expensive-to-evaluate deterministic computer models with parameters defined as interval variables. These approaches employ a machine learning based optimization strategy, the so-called Bayesian optimization, for evaluating the upper and lower bounds of a generic response variable over the set of possible responses obtained when each interval variable varies independently over its range. The lack of knowledge caused by not evaluating the response function for all the possible combinations of the interval variables is accounted for by developing a probabilistic description of the response variable itself by using a Gaussian Process regression model. An iterative procedure is developed for selecting a small number of simulations to be evaluated for updating this statistical model by using well-established acquisition functions and to assess the response bounds. In both approaches, an initial training dataset is defined. While one approach builds iteratively two distinct training datasets for evaluating separately the upper and lower bounds of the response variable, the other one builds iteratively a single training dataset. Consequently, the two approaches will produce different bound estimates at each iteration. The upper and lower response bounds are expressed as point estimates obtained from the mean function of the posterior distribution. 
Moreover, a confidence interval on each estimate is provided to effectively communicate to engineers when these estimates are obtained at a combination of the interval variables for which no deterministic simulation has been run. Finally, two metrics are proposed to define conditions for assessing whether the predicted bound estimates can be considered satisfactory. The applicability of these two approaches is illustrated with two numerical applications, one focusing on vibration and the other on vibro-acoustics.","Bayesian optimization; Bounded uncertainty; Communicating uncertainty; Expensive-to-evaluate deterministic computer models; Gaussian process","en","journal article","","","","","","","","","","","Mechanics and Physics of Structures","","",""
"uuid:0dbcd766-111a-498c-a7c3-b7e6b44deeab","http://resolver.tudelft.nl/uuid:0dbcd766-111a-498c-a7c3-b7e6b44deeab","When Reality Kicks In: Exploring the Influence of Local Context on Community-Based Design","Klerks, Gwen (Eindhoven University of Technology); Slingerland, G. (TU Delft System Engineering); Kalinauskaite, Indre (Eindhoven University of Technology); Brodersen Hansen, Nicolai (Aalborg University); Schouten, Ben (Eindhoven University of Technology)","","2022","Social sustainability is becoming an increasingly important topic in design practice, calling for more contextual perspectives on the process of design for social sustainability. This paper presents a retrospective case study analyzing the design process of a serious game which aimed to empower teenagers to organize events to strengthen community bonds. The community context in which the collaborative project took place underwent significant contextual changes due to the COVID-19 pandemic. Analysis using the Ecologies of Contestation framework shows the influence of multiple contextual levels (Socio-cultural, Power, Constructed, and Values-based) on the design process. Moreover, the paper discusses multiple contextual factors which influenced the design process and presents four suggestions for designers to anticipate and benefit from dynamics in these contextual elements. The suggestions regard (1) integrating the temporal dimension in the collaborative design processes, (2) carefully considering (value) alignment between actors, (3) leveraging values in the collaborative design process, and (4) acknowledging and responding to the multilayered nature of communities throughout the design process. 
As such, this paper explores the relationships between the community context and the collaborative design process to contribute to more resilient design practices.","design process; civic communities; co-design; context dynamics; digital civics; community-based design","en","journal article","","","","","","","","","","","System Engineering","","",""
"uuid:79d6e790-bbfe-4760-9d9c-2dfe36be9560","http://resolver.tudelft.nl/uuid:79d6e790-bbfe-4760-9d9c-2dfe36be9560","Adaptive Reuse of Heritage Buildings: From a Literature Review to a Model of Practice","Arfa, F. (TU Delft Heritage & Technology); Zijlstra, H. (TU Delft Heritage & Design); Lubelli, B. (TU Delft Heritage & Technology); Quist, W.J. (TU Delft Heritage & Technology)","","2022","The Adaptive Reuse (AR) of heritage buildings is a complex process, which aims to preserve the values of heritage buildings while adapting them for use in the present and transferring them to the future. This paper aims to identify steps in this process and develop a structured model. The model is an ‘ideal’, it needs validation in practice; however, it is expected that following this model can help to preserve and conserve the values of heritage buildings. To come to an overview of the process and to identify its main steps, a literature review at an international level has been conducted. The analysis of the literature revealed that the AR process as a whole in relation to heritage buildings has not been widely studied. Based on the results of this review, a conceptual model representing the AR process of heritage buildings has been defined. This model consists of 10 steps: ‘initiative’, ‘analysis of heritage buildings’, ‘value assessment, ‘mapping level of significance’, ‘definition of adaptive reuse potential’, ‘definition of design strategy’, ‘final decision-making’, ‘execution’, ‘maintenance’, and ‘evaluation after years’. This model can act as a comprehensive theoretical basis for further studies on the AR process of heritage buildings.","Adaptive reuse; process; heritage buildings; built environment; built heritage; conservation; sustainable development; literature review; model","en","journal article","","","","","","","","","","","Heritage & Technology","","",""
"uuid:a8f21b3a-6156-442e-9a3a-6a7a872955dc","http://resolver.tudelft.nl/uuid:a8f21b3a-6156-442e-9a3a-6a7a872955dc","Impact of occupational risk prevention measures during process disturbances in TBM tunnelling","Terheijden, O. T. (Paltrock); van Gelder, P.H.A.J.M. (TU Delft Safety and Security Science); Broere, W. (TU Delft Geo-engineering)","","2022","When process disturbances occur, workers on the Tunnel Boring Machine (TBM) will need to operate outside safe zones, reducing or eliminating the safety barrier ‘distance’ between them and potential sources of risk. Consequently, disturbances have a higher risk potential than regular TBM operations. By comparing the risks of registered process disturbances with the regular TBM process, we try to predict accident scenarios. The exposure risk is defined by the exposure time and the injury severity. Exposure times have been determined from case histories, where on average 11% of the construction period is attributed to disturbances. The potential number of casualties, including less common incident scenarios, has been determined using an accident scenario building toolkit. We find that the factors that contribute most to occupational risk reduction are the (correct) use of available risk prevention measures, the correct design of safety barriers and making these barriers available to personnel, as well as detailed planning of procedures such that specific tasks are performed in a uniform and predetermined manner.","Accident scenarios; Construction industry; Occupational risk; Process disturbances; Storybuilder; TBM tunnelling","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Safety and Security Science","","",""
"uuid:9010658f-c9ee-4791-9298-ec15b7734114","http://resolver.tudelft.nl/uuid:9010658f-c9ee-4791-9298-ec15b7734114","Process Intensification as Game Changer in Enzyme Catalysis","Burek, Bastien O. (DECHEMA Research Institute); Dawood, A.W.H. (Hamburg University of Technology); Hollmann, F. (TU Delft BT/Biocatalysis); Liese, Andreas (Hamburg University of Technology); Holtmann, Dirk (Technische Hochschule Mittelhessen, Giessen)","","2022","Enzyme catalysis has made tremendous progress over the last years in the identification of new enzymes and new enzymatic reactivities as well as the optimization of existing enzymes. However, the performance of the resulting processes is often still limited, e.g., with regard to productivity, realized product concentrations and the stability of the enzymes. Different topics (like limited specific activity, unfavourable kinetics or limited enzyme stability) can be addressed via enzyme engineering. On the other hand, there is also a long list of topics that are not addressable by enzyme engineering. Typical examples are unfavourable reaction thermodynamics, selectivity in multistep reactions or low water solubility. These challenges can only be addressed through an adaptation of the reaction system. The procedures of process intensification (PI) represent a good approach to reach the most suitable systems. The general objective of PI is to achieve significant benefits in terms of capital and operating costs as well as product quality, waste, and process safety by applying innovative principles. The aim of this review is to show the current capabilities and future potentials of PI in enzyme catalysis, focused on enzymes of the class of oxidoreductases. 
The focus of the paper is on alternative methods of energy input, innovative reactor concepts and reaction media with improved properties.","biocatalysis; process intensification; energy input in biocatalysis; reactor design; solvent; electrobiocatalysis","en","journal article","","","","","","","","","","","BT/Biocatalysis","","",""
"uuid:65257245-10fa-4487-8d37-d01b1fc36220","http://resolver.tudelft.nl/uuid:65257245-10fa-4487-8d37-d01b1fc36220","DeltaConv: Anisotropic Operators for Geometric Deep Learning on Point Clouds","Wiersma, R.T. (TU Delft Computer Graphics and Visualisation); Nasikun, A. (TU Delft Computer Graphics and Visualisation); Eisemann, E. (TU Delft Computer Graphics and Visualisation); Hildebrandt, K.A. (TU Delft Computer Graphics and Visualisation)","","2022","Learning from 3D point-cloud data has rapidly gained momentum, motivated by the success of deep learning on images and the increased availability of 3D data. In this paper, we aim to construct anisotropic convolution layers that work directly on the surface derived from a point cloud. This is challenging because of the lack of a global coordinate system for tangential directions on surfaces. We introduce DeltaConv, a convolution layer that combines geometric operators from vector calculus to enable the construction of anisotropic filters on point clouds. Because these operators are defined on scalar- and vector-fields, we separate the network into a scalar- and a vector-stream, which are connected by the operators. The vector stream enables the network to explicitly represent, evaluate, and process directional information. Our convolutions are robust and simple to implement and match or improve on state-of-the-art approaches on several benchmarks, while also speeding up training and inference.","Point Clouds; Point Cloud Classification; Point Cloud Segmentation; Point Cloud Learning; Point Cloud Processing; geometric deep learning; Graph CNN","en","journal article","","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:a57ebb95-11d0-4d77-8d4c-279a6810d972","http://resolver.tudelft.nl/uuid:a57ebb95-11d0-4d77-8d4c-279a6810d972","Fusion of Data from Multiple Automotive Radars for High-Resolution DoA Estimation","Suvarna, Anusha Ravish (NXP Semiconductors); Koppelaar, Arie (NXP Semiconductors); Jansen, Feike (NXP Semiconductors); Wang, J. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2022","High angular resolution is in high demand in automotive radar. To achieve a high azimuth resolution, a large aperture antenna array is required. Although the MIMO technique can be used to form larger virtual apertures, a large number of transmitter-receiver channels is needed, which is still technologically challenging and costly. To circumvent this problem, we propose high-resolution Direction of Arrival (DoA) estimation using multiple small radar sensors distributed on the fascia of the automobile. To exploit the diversity gain due to different target observation angles by different radars, a block Focal Underdetermined System Solver (FOCUSS) based approach is proposed to incoherently fuse the data from multiple small MIMO sensors. This method significantly improves the DoA estimation compared to a single sensor, decreases the probability of false alarm, and increases the probability of multiple-target detection. Its performance is demonstrated through both numerical simulations and experimental results.","Compressive Sensing (CS); FOCUSS; Block sparsity; distributed radar; MIMO; automotive radar; OMP; BOMP; incoherent processing; ambiguity function; single snap-shot; DoA estimation","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-11-03","","","Microwave Sensing, Signals & Systems","","",""
"uuid:bcf14ce5-68d2-49ba-a8ff-b48b9803cd19","http://resolver.tudelft.nl/uuid:bcf14ce5-68d2-49ba-a8ff-b48b9803cd19","Mean Field Behavior of Collaborative Multiagent Foragers","Jarne Ornia, D. (TU Delft Team Manuel Mazo Jr); Zufiria, Pedro J. (Universidad Politécnica de Madrid); Mazo, M. (TU Delft Team Manuel Mazo Jr)","","2022","Collaborative multiagent robotic systems, where agents coordinate by modifying a shared environment, often result in undesired dynamical couplings that complicate the analysis and experiments when solving a specific problem or task. Simultaneously, biologically inspired robotics relies on simplifying agents and increasing their number to obtain more efficient solutions to such problems, drawing similarities with natural processes. In this work, we focus on the problem of a biologically inspired multiagent system solving collaborative foraging. We show how mean field techniques can be used to reformulate such a stochastic multiagent problem into a deterministic autonomous system. This decouples the agent dynamics, enabling the computation of limit behaviors and the analysis of optimality guarantees. Furthermore, we analyse how having a finite number of agents affects the performance when compared to the mean field limit, and we discuss the implications of such limit approximations in this multiagent system, which have an impact on more general collaborative stochastic problems.","Agent-based systems; Collaboration; Convergence; learning and adaptive systems; mean field models; Random variables; Robot kinematics; Stochastic processes; swarms; Task analysis; Trajectory","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-09-07","","","Team Manuel Mazo Jr","","",""
"uuid:d5eb7259-8f15-43dd-a8d1-1ba11bf4c558","http://resolver.tudelft.nl/uuid:d5eb7259-8f15-43dd-a8d1-1ba11bf4c558","On the retrieval of forward-scattered waveforms from acoustic reflection and transmission data with the Marchenko equation","van der Neut, J.R. (TU Delft Applied Geophysics and Petrophysics); Brackenhoff, J.A. (TU Delft Applied Geophysics and Petrophysics; ETH Zürich); Meles, G.A. (TU Delft Applied Geophysics and Petrophysics; University of Lausanne); Zhang, L. (TU Delft Applied Geophysics and Petrophysics; China University of Geosciences, Wuhan); Slob, E.C. (TU Delft Applied Geophysics and Petrophysics); Wapenaar, C.P.A. (TU Delft Applied Geophysics and Petrophysics)","","2022","A Green's function in an acoustic medium can be retrieved from reflection data by solving a multidimensional Marchenko equation. This procedure requires a priori knowledge of the initial focusing function, which can be interpreted as the inverse of a transmitted wavefield as it would propagate through the medium, excluding (multiply) reflected waveforms. In practice, the initial focusing function is often replaced by a time-reversed direct wave, which is computed with help of a macro velocity model. Green's functions that are retrieved under this (direct-wave) approximation typically lack forward-scattered waveforms and their associated multiple reflections. We examine whether this problem can be mitigated by incorporating transmission data. Based on these transmission data, we derive an auxiliary equation for the forward-scattered components of the initial focusing function. We demonstrate that this equation can be solved in an acoustic medium with mass density contrast and constant propagation velocity. 
By solving the auxiliary and Marchenko equation successively, we can include forward-scattered waveforms in our Green's function estimates, as we demonstrate with a numerical example.","acoustic propagation; acoustic signal processing; Acoustic waves; Acoustics; Computational modeling; Focusing; Frequency control; Green's function methods; Mathematical models; Windows","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-11-28","","","Applied Geophysics and Petrophysics","","",""
"uuid:e3d9cf10-922f-4f17-bda7-30b47876854f","http://resolver.tudelft.nl/uuid:e3d9cf10-922f-4f17-bda7-30b47876854f","Contour Method with Uncertainty Quantification: A Robust and Optimised Framework via Gaussian Process Regression","Tognan, A. (Università degli Studi di Udine); Laurenti, L. (TU Delft Team Luca Laurenti); Salvati, E. (Università degli Studi di Udine)","","2022","Background: Over the past 20 years, the Contour Method (CM) has been extensively implemented to evaluate residual stress at the macro scale, especially in products where material processing is involved. Despite this, insufficient attention has been devoted to addressing the problems of input data filtering and residual stress uncertainty quantification. Objective: The present research aims to tackle this fundamental issue by combining Gaussian Process Regression (GPR) with the CM. Thanks to its stochastic nature, GPR associates a Gaussian distribution with every subset of data, thus holding the potential to model the inherent uncertainty of the input data set and to link it to the measurements and the surface roughness. Methods: The conventional, non-robust spline smoothing process is effectively replaced by the GPR, which is capable of providing uncertainties over the fitting. Indeed, the GPR stochastically and automatically identifies the fitting parameter, thus making the experimental data post-processing practically unaffected by the user’s experience. Moreover, the final residual stress uncertainty is efficiently evaluated through an optimised Monte Carlo Finite Element simulation, by appropriately perturbing the input dataset according to the GPR predictions. Results: The simulation is globally optimised by exploiting numerical techniques, such as LU-factorisation, and by developing an on-line convergence criterion. In order to show the capability of the presented approach, a Friction Stir Welded plate is considered as a case study. 
For this problem, it was shown how residual stress and its uncertainty can be accurately evaluated in approximately 15 minutes using a low-budget personal computer. Conclusions: The method developed herein overcomes the key limitation of the standard spline smoothing approach and this provides a robust and optimised computational framework for routinely evaluating the residual stress and its associated uncertainty. The implications are very significant as the evaluation accuracy of the CM is now taken to a higher level.","Aluminium Alloy; Contour Method; Friction Stir Welding; Gaussian Process Regression; Uncertainty Quantification","en","journal article","","","","","","","","","","","Team Luca Laurenti","","",""
"uuid:f51c6222-6f23-4076-99a1-72ae267ad502","http://resolver.tudelft.nl/uuid:f51c6222-6f23-4076-99a1-72ae267ad502","Convolutional Filtering in Simplicial Complexes","Isufi, E. (TU Delft Multimedia Computing); Yang, M. (TU Delft Multimedia Computing)","","2022","This paper proposes convolutional filtering for data whose structure can be modeled by a simplicial complex (SC). SCs are mathematical tools that capture not only pairwise relationships, as graphs do, but also higher-order network structures. These filters are built by following the shift-and-sum principle of the convolution operation and rely on the Hodge Laplacians to shift the signal within the simplex. But since SCs also exhibit inter-simplex coupling, we use the incidence matrices to transfer the signal to adjacent simplices and build a filter bank to jointly filter signals from different levels. We prove several interesting properties of the proposed filter bank, including permutation and orientation equivariance, a computational complexity that is linear in the SC dimension, and a spectral interpretation using the simplicial Fourier transform. We illustrate the proposed approach with numerical experiments.","Hodge Laplacian; simplicial filter; topological signal processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Multimedia Computing","","",""
"uuid:64dca9cc-df52-4a15-a0c8-b53538c619fa","http://resolver.tudelft.nl/uuid:64dca9cc-df52-4a15-a0c8-b53538c619fa","Unlocking the Flexibility of District Heating Pipeline Energy Storage with Reinforcement Learning","Stepanovic, K. (TU Delft Algorithmics); Wu, J. (TU Delft Algorithmics; Flex Technologies); Everhardt, Rob (Flex Technologies); de Weerdt, M.M. (TU Delft Algorithmics)","","2022","The integration of pipeline energy storage in the control of a district heating system can lead to profit gain, for example by adjusting the electricity production of a combined heat and power (CHP) unit to the fluctuating electricity price. The uncertainty from the environment, the computational complexity of an accurate model, and the scarcity of placed sensors in a district heating system make the operational use of pipeline energy storage challenging. A vast majority of previous works determined a control strategy by a decomposition of a mixed-integer nonlinear model and significant simplifications. To mitigate consequential stability, feasibility, and computational complexity challenges, we model CHP economic dispatch as a Markov decision process. We use a reinforcement learning (RL) algorithm to estimate the system’s dynamics through interactions with the simulation environment. The RL approach is compared with a detailed nonlinear mathematical optimizer on day-ahead and real-time electricity markets and two district heating grid models. The proposed method achieves moderate profit impacted by environment stochasticity. The advantages of the RL approach are reflected in three aspects: stability, feasibility, and time scale flexibility. 
From this, it can be concluded that RL is a promising alternative for real-time control of complex, nonlinear industrial systems.","4th generation district heating; combined heat and power economic dispatch; Markov decision process; mixed-integer nonlinear program; pipeline energy storage; Q-learning","en","journal article","","","","","","","","","","","Algorithmics","","",""
"uuid:b0549d75-2d52-4ed5-8944-1a815477626e","http://resolver.tudelft.nl/uuid:b0549d75-2d52-4ed5-8944-1a815477626e","Near real-time detection of blockages in the proximity of combined sewer overflows using evolutionary ANNs and statistical process control","Rosin, T. R. (University of Exeter); Kapelan, Z. (TU Delft Sanitary Engineering; University of Exeter); Keedwell, E. (University of Exeter); Romano, M. (United Utilities)","","2022","Blockages are a major issue for wastewater utilities around the world, causing loss of service, environmental pollution, and significant cleanup costs. Increasing telemetry in combined sewer overflows (CSOs) provides the opportunity for near real-time data-driven modelling of wastewater networks. This paper presents a novel methodology, designed to detect blockages and other unusual events in the proximity of CSO chambers in near real-time. The methodology utilises an evolutionary artificial neural network (EANN) model for short-term CSO level predictions and statistical process control (SPC) techniques to analyse unusual level behaviour. The methodology was evaluated on historic blockage events from several CSOs in the UK and was demonstrated to detect blockage events quickly and reliably, with a low number of false alarms.","blockage detection; combined sewer overflow; evolutionary artificial neural network; radar rainfall nowcasts; statistical process control","en","journal article","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:b423776c-458c-4c16-8abe-43ef2dbd3241","http://resolver.tudelft.nl/uuid:b423776c-458c-4c16-8abe-43ef2dbd3241","Low Complex Accurate Multi-Source RTF Estimation","Li, C. (TU Delft Signal Processing Systems); Martinez, Jorge (TU Delft Electrical Engineering Education); Hendriks, R.C. (TU Delft Signal Processing Systems)","","2022","Many multi-microphone algorithms depend on knowing the relative acoustic transfer functions (RTFs) of the individual sound sources in the acoustic scene. However, accurate joint RTF estimation for multiple sources is a challenging problem. Existing methods to jointly estimate the RTFs for multiple sources either have unsatisfactory performance or suffer from a very large computational complexity. In this paper, we propose a method for robust estimation of the individual RTFs in a multi-source acoustic scenario. The presented algorithm is based on linear algebraic concepts and is therefore of lower computational complexity compared to a recently presented state-of-the-art algorithm, while having similar performance. Experimental results are presented to demonstrate the RTF estimation performance as well as the noise reduction performance when combining the estimated RTFs with a beamformer.","Joint diagonalization; microphone array signal processing; source separation; RTF estimation; speech enhancement","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Signal Processing Systems","","",""
"uuid:d0245518-921d-47ff-9df5-12d7108f93d3","http://resolver.tudelft.nl/uuid:d0245518-921d-47ff-9df5-12d7108f93d3","Toward Linguistic Recognition of Generalized Anxiety Disorder","Rook, L. (TU Delft Economics of Technology and Innovation); Mazza, M.C. (TU Delft Education and Student Affairs); Lefter, I. (TU Delft System Engineering); Brazier, F.M. (TU Delft System Engineering)","","2022","Background: Generalized anxiety disorder (GAD) refers to extreme, uncontrollable, and persistent worry and anxiety. The disorder is known to affect the social functioning and well-being of millions of people, but despite its prevalence and burden to society, it has proven difficult to identify unique behavioral markers. Interestingly, the worrying behavior observed in GAD is argued to stem from a verbal linguistic process. Therefore, the aim of the present study was to investigate if GAD can be predicted from the language people use to put their anxious worries into words. Given the importance of avoidance sensitivity (a higher likelihood to respond anxiously to novel or unexpected triggers) in GAD, this study also explored if prediction accuracy increases when individual differences in behavioral avoidance and approach sensitivity are taken into account.
Method: An expressive writing exercise was used to explore whether GAD can be predicted from linguistic characteristics of written narratives. Specifically, 144 undergraduate student participants were asked to recall an anxious experience during their university life, and describe this experience in written form. Clinically validated behavioral measures for GAD and self-reported sensitivity in behavioral avoidance/inhibition (BIS) and behavioral approach (BAS), were collected. A set of classification experiments was performed to evaluate GAD predictability based on linguistic features, BIS/BAS scores, and a concatenation of the two.
Results: The classification results show that GAD can, indeed, be successfully predicted from anxiety-focused written narratives. Prediction accuracy increased when differences in BIS and BAS were included, which suggests that, under those conditions, negatively valenced emotion words and words relating to social processes could be sufficient for recognition of GAD.
Conclusions: Undergraduate students with a high GAD score can be identified based on their written recollection of an anxious experience during university life. This insight is an important first step toward development of text-based digital health applications and technologies aimed at remote screening for GAD. Future work should investigate the extent to which these results uniquely apply to university campus populations or generalize to other demographics.","generalized anxiety disorder; mental distress; emotion regulation; natural language processing; BIS/BAS","en","journal article","","","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:351d390a-fb2f-4e99-97a8-a4f0b2083dd0","http://resolver.tudelft.nl/uuid:351d390a-fb2f-4e99-97a8-a4f0b2083dd0","Making Green Work: Implementation Strategies in a New Generation of Urban Forests","Muñoz Sanz, V. (TU Delft Urban Design); Romero Muñoz, Sara (Universidad Politécnica de Madrid); Sánchez Chaparro, Teresa (Universidad Politécnica de Madrid); Bello Gómez, Lorena (Harvard University); Herdt, T. (TU Delft Urban Design)","","2022","The concept of “urban forest” (UF) is gaining momentum in urban planning in the context of climate adaptation. Principles from the field of urban forestry are mainstreamed into urban planning, but little is known about effective tools for the successful implementation of new UFs. This article presents explorative research comparing how three cities (Almere, Madrid, and Boston) are dealing with the planning of a UF project, and their alignment with distinct organisational and typological interpretations of a UF. We employed a mixed-methods approach to gain insights into the main goals of the project, their organisational structure, and the employed planning process through the analysis of project documents and expert interviews. Our results point to an effective mainstreaming of environmental questions among stakeholders, but also indicate a poor development of objective criteria for the success of a UF. We note that municipal planners circumvented current internal rigidities and barriers by relying on intermediaries and local academia as providers of external knowledge, or by facilitating experiments. Finally, our results show that there may not be just one UF type to achieve the desired environmental and social goals and overcome implementation barriers. Conversely, each of the governance and organisational models behind the implementation of each type presents collaborative and mainstreaming challenges. 
Therefore, we see an opportunity in further research examining processes and institutions towards the collaborative building of UFs that could bridge gaps between top-down and bottom-up approaches and activate different types of agencies.","climate adaptation; mainstreaming; planning process; urban forestry; urban greening","en","journal article","","","","","","","","","","","Urban Design","","",""
"uuid:7cdb9191-06f4-4129-87a1-3574dfee535e","http://resolver.tudelft.nl/uuid:7cdb9191-06f4-4129-87a1-3574dfee535e","Formal Control Synthesis for Stochastic Neural Network Dynamic Models","Adams, S.J.L. (TU Delft Team Luca Laurenti); Lahijanian, Morteza (University of Colorado); Laurenti, L. (TU Delft Team Luca Laurenti)","","2022","Neural networks (NNs) are emerging as powerful tools to represent the dynamics of control systems with complicated physics or black-box components. Due to the complexity of NNs, however, existing methods are unable to synthesize complex behaviors with guarantees for NN dynamic models (NNDMs). This letter introduces a control synthesis framework for stochastic NNDMs with performance guarantees. The focus is on specifications expressed in linear temporal logic interpreted over finite traces (LTLf), and the approach is based on finite abstraction. Specifically, we leverage recent techniques for convex relaxation of NNs to formally abstract an NNDM into an interval Markov decision process (IMDP). Then, a strategy that maximizes the probability of satisfying a given specification is synthesized over the IMDP and mapped back to the underlying NNDM. We show that the process of abstracting NNDMs to IMDPs reduces to a set of convex optimization problems, hence guaranteeing efficiency. We also present an adaptive refinement procedure that makes the framework scalable. On several case studies, we illustrate that our framework is able to provide non-trivial guarantees of correctness for NNDMs with architectures of up to 5 hidden layers and hundreds of neurons per layer.","Formal methods; interval Markov decision processes; neural networks; switched systems; synthesis","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Team Luca Laurenti","","",""
"uuid:2d6ec311-a73e-4fb1-9059-fd8eb1c6fc3e","http://resolver.tudelft.nl/uuid:2d6ec311-a73e-4fb1-9059-fd8eb1c6fc3e","EdgeNets: Edge Varying Graph Neural Networks","Isufi, E. (TU Delft Multimedia Computing); Gama, Fernando (University of Pennsylvania); Ribeiro, Alejandro (University of Pennsylvania)","","2022","Driven by the outstanding performance of neural networks in the structured Euclidean domain, recent years have seen a surge of interest in developing neural networks for graphs and data supported on graphs. The graph is leveraged at each layer of the neural network as a parameterization to capture detail at the node level with a reduced number of parameters and computational complexity. Following this rationale, this paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet. An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors. By extrapolating this strategy to more iterations between neighboring nodes, the EdgeNet learns edge- and neighbor-dependent weights to capture local detail. This is a general linear and local operation that a node can perform and encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs). In writing different GNN architectures with a common language, EdgeNets highlight specific architecture advantages and limitations, while providing guidelines to improve their capacity without compromising their local implementation. For instance, we show that GCNNs have a parameter sharing structure that induces permutation equivariance. This can be an advantage or a limitation, depending on the application. In cases where it is a limitation, we propose hybrid approaches and provide insights to develop several other solutions that promote parameter sharing without enforcing permutation equivariance. 
Another interesting conclusion is the unification of GCNNs and GATs - approaches that have been so far perceived as separate. In particular, we show that GATs are GCNNs on a graph that is learned from the features. This particularization opens the doors to develop alternative attention mechanisms for improving discriminatory power.","Edge varying; graph neural networks; graph signal processing; graph filters; learning on graphs","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Multimedia Computing","","",""
"uuid:a1e8dbe3-a6cc-416c-bb0d-4e6deab20196","http://resolver.tudelft.nl/uuid:a1e8dbe3-a6cc-416c-bb0d-4e6deab20196","A Cascaded Structure for Generalized Graph Filters","Coutino, Mario (TU Delft Signal Processing Systems; TNO); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2022","One of the main challenges of graph filters is the stability of their design. While classical graph filters allow for a stable design using optimal polynomial approximation theory, generalized graph filters tend to suffer from the ill-conditioning of the involved system matrix. This issue, accentuated for increasing graph filter orders, naturally leads to very large (small) filter coefficients or error saturation, casting a shadow on the benefits of these richer graph filter structures. In addition to this, data-driven design/learning of graph filters with large filter orders, even in the case of classical graph filters, suffers from the eigenvalue spread of the input data covariance matrix and mode coupling, leading to convergence-related issues as the ones observed when identifying time-domain filters with large orders. To alleviate these conditioning and convergence problems, and to reduce the overall design complexity, in this work, we propose a cascaded implementation of generalized graph filters and an efficient algorithm for designing the graph filter coefficients in both model- and data-driven settings. Further, we establish the connections of this implementation with so-called graph convolutional neural networks and demonstrate the performance of the proposed structure in different network applications. By the proposed approach, further error reduction and better design stability are achieved.","cascaded filters; distributed optimization; graph filtering; graph signal processing; signal processing on graphs","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-02-26","","","Signal Processing Systems","","",""
"uuid:4cb782fd-7700-458a-a2f5-93a6e273a9bc","http://resolver.tudelft.nl/uuid:4cb782fd-7700-458a-a2f5-93a6e273a9bc","Orchestrating Energy-Efficient vRANs: Bayesian Learning and Experimental Results","Ayala-Romero, Jose A. (Trinity College Dublin); Garcia-Saavedra, Andres (NEC Laboratories Europe); Costa-Perez, Xavier (NEC Laboratories Europe); Iosifidis, G. (TU Delft Embedded Systems)","","2022","Virtualized base stations (vBS) can be implemented in diverse commodity platforms and are expected to bring unprecedented operational flexibility and cost efficiency to the next generation of cellular networks. However, their widespread adoption is hampered by their complex configuration options that affect in a non-traditional fashion both their performance and their power consumption. Following an in-depth experimental analysis in a bespoke testbed, we characterize the vBS power consumption profile and reveal previously unknown couplings between their various control knobs. Motivated by these findings, we develop a Bayesian learning framework for the orchestration of vBSs and design two novel algorithms: (i) BP-vRAN, which employs online learning to balance the vBS performance and energy consumption, and (ii) SBP-vRAN, which augments our optimization approach with safe controls that maximize performance while respecting hard power constraints. We show that our approaches are data-efficient, i.e., converge an order of magnitude faster than state-of-the-art Deep Reinforcement Learning methods, and achieve optimal performance. 
We demonstrate the efficacy of these solutions in an experimental prototype using real traffic traces.","Bayesian Learning; Gaussian Processes; Online Learning; Radio Access Networks; Energy efficiency; Green networks; Network Virtualization; Wireless Testbeds","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-05-01","","","Embedded Systems","","",""
"uuid:cb337d48-3cb9-47e4-8d1a-28d2db2d1d54","http://resolver.tudelft.nl/uuid:cb337d48-3cb9-47e4-8d1a-28d2db2d1d54","Deep Neural Network-Based Digital Pre-distortion for High Baudrate Optical Coherent Transmission","Bajaj, V. (TU Delft Team Sander Wahls; Nokia Bell Labs); Buchali, Fred (Nokia Solutions and Networks); Chagnon, Mathieu (Nokia Bell Labs); Wahls, S. (TU Delft Team Sander Wahls); Aref, Vahid (Nokia Solutions and Networks)","","2022","High-symbol-rate coherent optical transceivers suffer more from the critical responses of transceiver components at high frequency, especially when applying a higher order modulation format. Recently, we proposed in [20] a neural network (NN)-based digital pre-distortion (DPD) technique trained to mitigate the transceiver response of a 128~GBaud optical coherent transmission system. In this paper, we further detail this work and assess the NN-based DPD by training it using either a direct learning architecture (DLA) or an indirect learning architecture (ILA), and compare performance against a Volterra series-based DPD and a linear DPD. Furthermore, we willfully increase the transmitter nonlinearity and compare the performance of the three DPDs considered. The proposed NN-based DPD trained using DLA performs the best among the three contenders, providing more than 1~dB signal-to-noise ratio (SNR) gains for uniform 64-quadrature amplitude modulation (QAM) and PCS-256-QAM signals at the output of a conventional coherent receiver DSP. Finally, the NN-based DPD enables achieving a record 1.61~Tb/s net rate transmission on a single channel after 80~km of standard single mode fiber (SSMF).","Artificial neural networks; digital pre-distortion; digital signal processing; machine learning and optical fiber communication; Nonlinear optics; Optical amplifiers; Optical fiber amplifiers; Optical fibers; Optical modulation; Optical transmitters","en","journal article","","","","","","","","","","","Team Sander Wahls","","",""
"uuid:66d4041f-5b78-4006-b9fa-ae18ad9ae01c","http://resolver.tudelft.nl/uuid:66d4041f-5b78-4006-b9fa-ae18ad9ae01c","InFocus: A spatial coding technique to mitigate misfocus in near-field LoS beamforming","Myers, N.J. (TU Delft Team Nitin Myers); Heath, Robert W. (University of North Carolina)","","2022","Phased arrays, commonly used in IEEE 802.11ad and 5G radios, are capable of focusing radio frequency signals in a specific direction or a spatial region. Beamforming achieves such directional or spatial concentration of signals and enables phased array-based radios to achieve high data rates. Designing beams for millimeter wave and terahertz communication using massive phased arrays, however, is challenging due to hardware constraints and the wide bandwidth in these systems. For example, beams which are optimal at the center frequency may perform poor in wideband communication systems where the radio frequencies differ substantially from the center frequency. The poor performance in such systems is due to differences in the optimal beamformers corresponding to distinct radio frequencies within the wide bandwidth. Such a mismatch leads to a misfocus effect in near-field systems and the beam squint effect in far-field systems. In this paper, we investigate the misfocus effect and propose InFocus, a low complexity technique to construct beams that are well suited for massive wideband phased arrays. The beams are constructed using a carefully designed frequency modulated waveform in the spatial dimension. InFocus mitigates beam misfocus and beam squint when applied to near-field and far-field systems.","Antenna arrays; Array signal processing; Arrays; Bandwidth; Beam squint; Misfocus; Mm-wave; Near-field communication; Phased arrays; spatial FMCW; Standards; Terahertz; Wireless communication","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-03-13","","","Team Nitin Myers","","",""
"uuid:d94cf0a0-5366-4cd3-9dda-b20bc53445a0","http://resolver.tudelft.nl/uuid:d94cf0a0-5366-4cd3-9dda-b20bc53445a0","Using STLs for Effective In-field Test of GPUs","Condia, Josie E.Rodriguez (Politecnico di Torino); Augusto da Silva, F. (TU Delft Computer Engineering; Cadence Design Systems); Bagbaba, Ahmet Cagri (Cadence Design Systems; Tallinn University of Technology); Guerrero-Balaguera, Juan-David (Politecnico di Torino); Hamdioui, S. (TU Delft Quantum & Computer Engineering); Sauer, Christian (Cadence Design Systems); Sonza Reorda, Matteo (Politecnico di Torino)","","2022","Editor's notes: GPUs have seen an increased adoption in autonomous systems. This article assesses the fault coverage that can be attained through software self-test strategies for in-field test of GPUs. - Nicola Nicolici, McMaster University","Software-based Self-test (SBST); Graphics Processing Units (GPUs); Functional-safety; Reliability","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-10-05","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:9e80e8d6-186d-4260-b56b-7400f35e258a","http://resolver.tudelft.nl/uuid:9e80e8d6-186d-4260-b56b-7400f35e258a","Rapid Design-Space Exploration for Low-Power Manycores under Process Variation utilizing Machine Learning","Majzoub, Sohaib (University of Sharjah); Saleh, Resve A. (University of British Columbia); Taouil, M. (TU Delft Computer Engineering); Hamdioui, S. (TU Delft Quantum & Computer Engineering); Bamakhrama, Mohamed (Synopsys)","","2022","Design-space exploration for low-power manycore design is a daunting and time-consuming task which requires some complex tools and frameworks to achieve. In the presence of process variation, the problem becomes even more challenging, especially the time associated with trial-and-error selection of the proper options in the tools to obtain the optimal power dissipation. The key contribution of this work is the novel use of machine learning to speed up the design process by embedding the tool expertise needed for low power design-space exploration for manycores into a trained neural network. To enable this, we first generate a large volume of data for 36000 benchmark applications by running them under all possible configurations to find the optimal one in terms of power. This is done using our own tool called LVSiM, a holistic manycore optimization program including process variations. A neural network is trained with this information to build in the expertise. A second contribution of this work is to define a new set of features, relevant to power and performance optimization, when training the neural network. At design time, the trained neural network is used to select the proper options on behalf of the user based on the features of any new application. 
However, one problem encountered with this approach is that the database constructed for machine learning has many outliers due to randomness associated with process variation which creates a major headache for classification - the supervised learning task performed by neural networks. The third key contribution of this work is a novel data coercion algorithm used as a corrective measure to handle the outliers. The proposed data coercion scheme produces results that are within 3.9% of the optimal power consumption compared to 7% without data coercion. Furthermore, the proposed method is about an order of magnitude faster than a heuristic approach and two orders of magnitude faster than a brute-force approach for design-space exploration.","Neural Network; Simulator; manycore; low-power; process variation; frequency scaling; voltage scaling; 3D-Stack; voltage selection; within-die variation","en","journal article","","","","","","","","","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:04735cdc-2d34-4ad4-81ea-c5492f9cb694","http://resolver.tudelft.nl/uuid:04735cdc-2d34-4ad4-81ea-c5492f9cb694","Learning Time-Varying Graphs from Online Data","Natali, A. (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Multimedia Computing); Coutino, Mario (TNO); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2022","This work proposes an algorithmic framework to learn time-varying graphs from online data. The generality offered by the framework renders it model-independent, i.e., it can be theoretically analyzed in its abstract formulation and then instantiated under a variety of model-dependent graph learning problems. This is possible by phrasing (time-varying) graph learning as a composite optimization problem, where different functions regulate different desiderata, e.g., data fidelity, sparsity or smoothness. Instrumental for the findings is recognizing that the dependence of the majority (if not all) data-driven graph learning algorithms on the data is exerted through the empirical covariance matrix, representing a sufficient statistic for the estimation problem. Its user-defined recursive update enables the framework to work in non-stationary environments, while iterative algorithms building on novel time-varying optimization tools explicitly take into account the temporal dynamics, speeding up convergence and implicitly including a temporal-regularization of the solution. We specialize the framework to three well-known graph learning models, namely, the Gaussian graphical model (GGM), the structural equation model (SEM), and the smoothness-based model (SBM), where we also introduce ad-hoc vectorization schemes for structured matrices (symmetric, hollows, etc.) which are crucial to perform correct gradient computations, other than enabling to work in low-dimensional vector spaces and hence easing storage requirements. 
After discussing the theoretical guarantees of the proposed framework, we corroborate it with extensive numerical tests in synthetic and real data.","Graph topology identification; dynamic graph learning; network topology inference; graph signal processing","en","journal article","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:705557e5-9223-4705-8b09-622f1017e6c3","http://resolver.tudelft.nl/uuid:705557e5-9223-4705-8b09-622f1017e6c3","Towards automatic reconstruction of 3D city models tailored for urban flow simulations","Pađen, I. (TU Delft Urban Data Science); Garcia Sanchez, C. (TU Delft Urban Data Science); Ledoux, H. (TU Delft Urban Data Science)","","2022","In the computational fluid dynamics simulation workflow, the geometry preparation step is often regarded as a tedious, time-consuming task. Many practitioners consider it one of the main bottlenecks in the simulation process. The more complex the geometry, the longer the necessary work, meaning this issue is amplified for urban flow simulations that cover large areas with complex building geometries. To address the issue of geometry preparation, we propose a workflow for automatically reconstructing simulation-ready 3D city models. The workflow combines 2D geographical datasets (e.g., cadastral data, topographic datasets) and aerial point cloud-based elevation data to reconstruct terrain, buildings, and imprint surface layers like water, low vegetation, and roads. Imprinted surface layers serve as different roughness surfaces for modeling the atmospheric boundary layer. Furthermore, the workflow is capable of automatically defining the influence region and domain size according to best practice guidelines. The resulting geometry aims to be error-free: without gaps, self-intersections, and non-manifold edges. The workflow was implemented into an open-source framework using modern, robust, and state-of-the-art libraries with the intent to be used for further developments. Our approach limits the geometry generation step to the order of hours (including input data retrieval and preparation), producing geometries that can be directly used for computational grid generation without additional preparation. 
The reconstruction done by the algorithm can last from a few seconds to a few minutes, depending on the size of the input data. We obtained and prepared the input data for our verification study in about 2 hours, while the reconstruction process lasted 1 minute. The unstructured computational meshes we created in an automatic mesh generator show satisfactory quality indicators, and the subsequent numerical simulation exhibits good convergence behavior, with the grid convergence index of observed variables less than 5%.","automatic city reconstruction; geometry preparation; pre-processing; computational fluid dynamics; semantic surfaces; 3D city modeling","en","journal article","","","","","","","","","","","Urban Data Science","","",""
"uuid:30fc8b53-6926-433f-bbb6-821400a7c5eb","http://resolver.tudelft.nl/uuid:30fc8b53-6926-433f-bbb6-821400a7c5eb","Quantifying households’ carbon footprint in cities using socioeconomic attributes: A case study for The Hague (Netherlands)","Patel, R.G. (Student TU Delft); Marvuglia, Antonino (Luxembourg Institute of Science and Technology); Baustert, Paul (Luxembourg Institute of Science and Technology); Huang, Yilin (TU Delft System Engineering); Shivakumar, Abhishek (United Nations); Nikolic, I. (TU Delft System Engineering); Verma, T. (TU Delft Policy Analysis)","","2022","Cities consume almost 80 percent of world’s energy and account for 60 percent of all the emissions of carbon dioxide and significant amounts of other greenhouse gases (GHG). The ongoing rapid urbanization will further increase GHG emissions of cities. The quantification of the environmental impact generated in cities is an important step to curb the impact. In fact, quantifying the consumption activities taking place inside a city, if differentiated by socioeconomic and demographic groups, can provide important insights for sustainable-consumption policies. However, the lack of high-resolution data related to these activities makes it difficult to quantify urban GHG emissions (as well as other impacts). This paper presents a methodology that can quantify the carbon footprint of households in cities using consumption data from a national or European level, where the resource consumption is linked to socioeconomic attributes of a population. The methodology is applied to analyzing the environmental impact by household resource consumption in the city of The Hague in the Netherlands. 
The key insights reveal potential intervention areas regarding resource consumption categories and demographic groups that can be targeted to reduce GHG emissions due to consumption-driven activities in the city.","Consumption-driven emissions; Process-based LCA; Cities; Urban policies; Random forest; Demographic clustering","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-01-27","","","System Engineering","","",""
"uuid:82741bc2-563a-426d-a43f-9e802ace3781","http://resolver.tudelft.nl/uuid:82741bc2-563a-426d-a43f-9e802ace3781","Delaunay Painting: Perceptual Image Colouring from Raster Contours with Gaps","Parakkat, A.D. (TU Delft Computer Graphics and Visualisation; Telecom Paris Tech); Memari, Pooran (Institut Polytechnique de Paris); Cani, Marie Paule (Institut Polytechnique de Paris)","","2022","We introduce Delaunay Painting, a novel and easy-to-use method to flat-colour contour-sketches with gaps. Starting from a Delaunay triangulation of the input contours, triangles are iteratively filled with the appropriate colours, thanks to the dynamic update of flow values calculated from colour hints. Aesthetic finish is then achieved, through energy minimisation of contour-curves and further heuristics enforcing the appropriate sharp corners. To be more efficient, the user can also make use of our colour diffusion framework, which automatically extends colouring to small, internal regions such as those delimited by hatches. The resulting method robustly handles input contours with strong gaps. As an interactive tool, it minimizes user's efforts and enables any colouring strategy, as the result does not depend on the order of interactions. We also provide an automatized version of the colouring strategy for quick segmentation of contours images, that we illustrate with applications to medical imaging and sketch segmentation.","assistive interfaces; computational geometry; image processing; interaction; modelling; shape completion; sketch coloring","en","journal article","","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:df03d38c-897f-4cac-b32d-e8a290cad62a","http://resolver.tudelft.nl/uuid:df03d38c-897f-4cac-b32d-e8a290cad62a","Demeter: A Fast and Energy-Efficient Food Profiler Using Hyperdimensional Computing in Memory","Shahroodi, T. (TU Delft Computer Engineering); Zahedi, M.Z. (TU Delft Computer Engineering); Firtina, Can (ETH Zürich); Alser, Mohammed (ETH Zürich); Wong, J.S.S.M. (TU Delft Computer Engineering); Mutlu, Onur (ETH Zürich); Hamdioui, S. (TU Delft Quantum & Computer Engineering)","","2022","Food profiling is an essential step in any food monitoring system needed to prevent health risks and potential frauds in the food industry. Significant improvements in sequencing technologies are pushing food profiling to become the main computational bottleneck. State-of-the-art profilers are unfortunately too costly for food profiling. Our goal is to design a food profiler that solves the main limitations of existing profilers, namely (1) working on massive data structures and (2) incurring considerable data movement, for a real-time monitoring system. To this end, we propose Demeter, the first platform-independent framework for food profiling. Demeter overcomes the first limitation through the use of hyperdimensional computing (HDC) and efficiently performs the accurate few-species classification required in food profiling. We overcome the second limitation by the use of an in-memory hardware accelerator for Demeter (named Acc-Demeter) based on memristor devices. Acc-Demeter actualizes several domain-specific optimizations and exploits the inherent characteristics of memristors to improve the overall performance and energy consumption of Acc-Demeter. We compare Demeter’s accuracy with other industrial food profilers using detailed software modeling. We synthesize Acc-Demeter’s required hardware using UMC’s 65nm library by considering an accurate PCM model based on silicon-based prototypes. 
Our evaluations demonstrate that Acc-Demeter achieves a (1) throughput improvement of 192× and 724× and (2) memory reduction of 36× and 33× compared to Kraken2 and MetaCache (2 state-of-the-art profilers), respectively, on typical food-related databases. Demeter maintains an acceptable profiling accuracy (within 2% of existing tools) and incurs a very low area overhead.","Food profiling; emerging memories; in memory processing; analog computing","en","journal article","","","","","","","","","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:f86c5075-bdf4-4b89-aab7-44c09cc5517b","http://resolver.tudelft.nl/uuid:f86c5075-bdf4-4b89-aab7-44c09cc5517b","Exoplanet cartography using convolutional neural networks","Meinke, K. (Student TU Delft); Stam, D.M. (TU Delft Astrodynamics & Space Missions); Visser, P.M. (TU Delft Mathematical Physics)","","2022","Context. In the near future, dedicated telescopes will observe Earth-like exoplanets in reflected parent starlight, allowing their physical characterization. Because of the huge distances, every exoplanet will remain an unresolved, single pixel, but temporal variations in the pixel’s spectral flux contain information about the planet’s surface and atmosphere.
Aims. We tested convolutional neural networks for retrieving a planet’s rotation axis, surface, and cloud map from simulated single-pixel observations of flux and polarization light curves. We investigated the influence of assuming that the reflection by the planets is Lambertian in the retrieval while in reality their reflection is bidirectional, and the influence of including polarization.
Methods. We simulated observations along a planet’s orbit using a radiative transfer algorithm that includes polarization and bidirectional reflection by vegetation, deserts, oceans, water clouds, and Rayleigh scattering in six spectral bands from 400 to 800 nm, at various levels of photon noise. The surface types and cloud patterns of the facets covering a model planet are based on probability distributions. Our networks were trained with simulated observations of millions of planets before retrieving maps of test planets.
Results. The neural networks can constrain rotation axes with a mean squared error (MSE) as small as 0.0097, depending on the orbital inclination. On a bidirectionally reflecting planet, 92% of ocean facets and 85% of vegetation, deserts, and cloud facets are correctly retrieved, in the absence of noise. With realistic amounts of noise, it should still be possible to retrieve the main map features with a dedicated telescope. Except for face-on orbits, a network trained with Lambertian reflecting planets yields significant retrieval errors when given observations of bidirectionally reflecting planets, in particular, brightness artifacts around a planet’s pole. Including polarization improves the retrieval of the rotation axis and the accuracy of the retrieval of ocean and cloudy map facets.","planets and satellites: surfaces; planets and satellites: oceans; planets and satellites: atmospheres; techniques: photometric; techniques: polarimetric / techniques: image processing","en","journal article","","","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:5ad8312a-e7df-410a-bb0e-cd1e97856d3c","http://resolver.tudelft.nl/uuid:5ad8312a-e7df-410a-bb0e-cd1e97856d3c","A Two-Stage Bayesian optimisation for Automatic Tuning of an Unscented Kalman Filter for Vehicle Sideslip Angle Estimation","Bertipaglia, A. (TU Delft Intelligent Vehicles); Shyrokau, B. (TU Delft Intelligent Vehicles); Alirezaei, Mohsen (Eindhoven University of Technology); Happee, R. (TU Delft Intelligent Vehicles)","","2022","This paper presents a novel methodology to auto-tune an Unscented Kalman Filter (UKF). It involves using a Two-Stage Bayesian Optimisation (TSBO), based on a t-Student Process to optimise the process noise parameters of a UKF for vehicle sideslip angle estimation. Our method minimises performance metrics, given by the average sum of the states’ and measurement’ estimation error for various vehicle manoeuvres covering a wide range of vehicle behaviour. The predefined cost function is minimised through a TSBO which aims to find a location in the feasible region that maximises the probability of improving the current best solution. Results on an experimental dataset show the capability to tune the UKF in 79.9% less time than using a genetic algorithm (GA) and the overall capacity to improve the estimation performance in an experimental test dataset of 9.9% to the current state-of-the-art GA.","Training; Intelligent vehicles; Measurement uncertainty; Gaussian processes; Cost function; Bayes methods; Kalman filters","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-01-19","","","Intelligent Vehicles","","",""
"uuid:2c66be20-9343-4b2c-8649-e842ae0391d7","http://resolver.tudelft.nl/uuid:2c66be20-9343-4b2c-8649-e842ae0391d7","Energy-efficient In-Memory Address Calculation","Yousefzadeh, Amirreza (Stichting IMEC Nederland); Stuijt, Jan (Stichting IMEC Nederland); Hijdra, Martijn (Stichting IMEC Nederland); Liu, Hsiao-Hsuan (IMEC); Gebregiorgis, A.B. (TU Delft Computer Engineering); Singh, A. (TU Delft Computer Engineering); Hamdioui, S. (TU Delft Quantum & Computer Engineering); Catthoor, Francky (IMEC)","","2022","Computation-in-Memory (CIM) is an emerging computing paradigm to address memory bottleneck challenges in computer architecture. A CIM unit cannot fully replace a general-purpose processor. Still, it significantly reduces the amount of data transfer between a traditional memory unit and the processor by enriching the transferred information. Data transactions between processor and memory consist of memory access addresses and values. While the main focus in the field of in-memory computing is to apply computations on the content of the memory (values), the importance of CPU-CIM address transactions and calculations for generating the sequence of access addresses for data-dominated applications is generally overlooked. However, the amount of information transactions used for ""address""can easily be even more than half of the total transferred bits in many applications. In this article, we propose a circuit to perform the in-memory Address Calculation Accelerator. Our simulation results showed that calculating address sequences inside the memory (instead of the CPU) can significantly reduce the CPU-CIM address transactions and therefore contribute to considerable energy saving, latency, and bus traffic. 
For a chosen application of guided image filtering, in-memory address calculation results in almost two orders of magnitude reduction in address transactions over the memory bus.","Hardware; Semiconductor memory; Memory and dense storage; Power estimation and optimization; Emerging architectures; In-memory processing; address calculation unit; energy optimization","en","journal article","","","","","","","","","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:025e2de8-c583-4416-8a3a-0ae89c5a85c9","http://resolver.tudelft.nl/uuid:025e2de8-c583-4416-8a3a-0ae89c5a85c9","Thermodynamic analysis of a novel integrated biomass pyrolysis-solid oxide fuel cells-combined heat and power system for co-generation of biochar and power","Kuo, P.C. (TU Delft Energy Technology; University of Tokyo); Illathu Kandy, B. (TU Delft Energy Technology; Indian Institute of Technology (IIT)); Özdemir, F. (TU Delft Energy Technology); Woudstra, T. (TU Delft Process and Energy); Aravind, P.V. (TU Delft Energy Technology; Rijksuniversiteit Groningen)","","2022","Biochar derived from pyrolysis or gasification has been gaining significant attention in the recent years due to its potential wide applications for the development of negative emissions technologies. A new concept was developed for biochar and power co-generation system using a combination of biomass pyrolysis (BP) unit, solid oxide fuel cells (SOFCs), and a combined heat and power (CHP) system. A set of detailed experimental data of pyrolysis product yields was established in Aspen Plus to model the BP process. The impacts of various operating parameters including current density (j), fuel utilization factor (Uf), pyrolysis gas reforming temperature (Treformer), and biochar split ratio (Rbiochar) on the SOFC and overall system performances in terms of energy and exergy analyses were evaluated. The simulation results indicated that increasing the Uf, Treformer, and Rbiochar can favorably improve the performances of the BP-SOFC-CHP system. As a whole, the overall electrical, energy and exergy efficiencies of the BP-SOFC-CHP system were in the range of 8–14%, 76–78%, and 71–74%, respectively. From the viewpoint of energy balance, burning the reformed pyrolysis gas can supply enough energy demand for the process to achieve a stand-alone BP-SOFC-CHP plant. 
In the case of a stand-alone system, the overall electrical, energy and exergy efficiencies were 5.4, 63.9 and 57.8%, respectively, with a biochar yield of 31.6%.","biochar; SOFC; biomass pyrolysis; process integration; thermodynamic analysis; negative emissions technologies","en","journal article","","","","","","","","","","Process and Energy","Energy Technology","","",""
"uuid:9369e2a3-b65e-48e9-b300-bbaacdd7e09c","http://resolver.tudelft.nl/uuid:9369e2a3-b65e-48e9-b300-bbaacdd7e09c","A New Baseline for Feature Description on Multimodal Imaging of Paintings","van der Toorn, J. (Student TU Delft); Wiersma, R.T. (TU Delft Computer Graphics and Visualisation); Vandivere, Abbie (Royal Picture Gallery Mauritshuis); Marroquim, Ricardo (TU Delft Computer Graphics and Visualisation); Eisemann, E. (TU Delft Computer Graphics and Visualisation)","","2022","Multimodal imaging is used by conservators and scientists to study the composition of paintings. To aid the combined analysis of these digitisations, such images must first be aligned. Rather than proposing a new domain-specific descriptor, we explore and evaluate how existing feature descriptors from related fields can improve the performance of feature-based painting digitisation registration. We benchmark these descriptors on pixel-precise, manually aligned digitisations of ''Girl with a Pearl Earring'' by Johannes Vermeer (c. 1665, Mauritshuis) and of ''18th-Century Portrait of a Woman''. As a baseline we compare against the well-established classical SIFT descriptor. We consider two recent descriptors: the handcrafted multimodal MFD descriptor, and the learned unimodal SuperPoint descriptor. Experiments show that SuperPoint starkly increases description matching accuracy by 40% for modalities with few modality-specific artefacts. Further, performing craquelure segmentation and using the MFD descriptor results in significant description matching accuracy improvements for modalities with many modality-specific artefacts.","Image registration; Cultural Heritage; Technical Imaging; Image Processing","en","conference paper","The Eurographics Association","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:b64f6296-e747-47e9-81e3-622db0ebc781","http://resolver.tudelft.nl/uuid:b64f6296-e747-47e9-81e3-622db0ebc781","Optimizing deep reinforcement learning policies for deteriorating systems considering ordered action structuring and value of information","Andriotis, C. (TU Delft Structural Design & Mechanics); Papakonstantinou, K.G. (Pennsylvania State University)","Li, J. (editor); Spanos, Pol D. (editor); Chen, J.B. (editor); Peng, Y.B. (editor)","2022","Inspection and maintenance (I&M) optimization entails many sources of computational complexity, among others, due to high-dimensional decision and state variables in multi-component systems, long planning horizons, stochasticity of objectives and constraints, and inherent uncertainties in measurements and models. This paper studies how the above can be addressed within the context of constrained Partially Observable Markov Decision Processes (POMDPs) and Deep Reinforcement Learning (DRL) in a unified fashion. Special emphasis is placed on how ordered structuring of I&M actions can be exploited to decompose the respective policy parametrizations in actor-critic DRL schemes, resulting in fully decoupled maintenance and inspection actors. It is shown that the Value of Information (VoI) is naturally utilized in such POMDP control frameworks, as directly associated with the DRL advantage functions that emerge in the gradient computations of the inspection policy parameters. Overall, the presented approach, following the natural flow of engineering decisions, results in new architectural configurations for policy networks, facilitating more efficient training, while alleviating further the dimensionality burdens related to combinatorial definitions of I&M actions. 
The efficiency of the methodology is demonstrated in numerical experiments of a structural system subject to corrosion, where the optimization problem is formulated to concurrently account for state and model uncertainties as well as long-term probability of failure exceedance constraints. Results showcase that the obtained DRL policies considerably outperform standard decision rules.","inspection & maintenance; deep reinforcement learning; partially observable Markov decision processes; value of information; stochastic constraints; decision theory","en","conference paper","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Structural Design & Mechanics","","",""
"uuid:3820dc1b-00e7-4625-80bd-0f94566df993","http://resolver.tudelft.nl/uuid:3820dc1b-00e7-4625-80bd-0f94566df993","Ergonomic factors affecting comprehension levels of traffic signs: A critical review","Berrio, Shyrle (Pontificia Universidad Javeriana); Barrero, Lope H. (Pontificia Universidad Javeriana); Zambrano, Laura (Pontificia Universidad Javeriana); Papadimitriou, E. (TU Delft Safety and Security Science)","","2022","Comprehension of traffic signs is important to road safety. This review aims to study the extent to which road users in different countries comprehend traffic signs and to identify which ergonomic principles in traffic sign design can affect the levels of comprehension. We conducted an extensive literature review dealing with comprehension of public traffic signs directed at any road user. We searched Journal articles indexed by Scopus, ScienceDirect, and Web of Science. The search identified 35 articles that assessed the comprehension of 931 traffic signs in 26 countries, including six studies that tested the comprehension of new versus existing traffic signs. Various methods have been implemented to measure traffic signs’ comprehension levels and assess traffic sign design's conformity to different ergonomic principles. Results indicate high variability in the comprehension levels of signs, e.g., signs such as “Road works” and “No U-turn” are highly comprehended (comprehension levels over 90 %), while other signs like “termination of road” are rarely comprehended by road users. Regarding the acceptable comprehension levels, 23.1 % of the assessed traffic signs achieved levels above 85 %; and 53.3 % of signs have comprehension levels lower than 67 %. On the other hand, twenty-four studies evaluated how traffic signs comply with ergonomic design principles. Incorporating ergonomic principles into the design of traffic signs can improve comprehension levels. 
However, apart from familiarity, there is uncertainty about the ergonomic principles that could maximize the comprehension of traffic signs. Efforts should be made to ensure that different populations of road users sufficiently comprehend traffic signs.","Comprehension process; Ergonomic principles; Infrastructure; Road safety; Sign design","en","review","","","","","","","","","","","Safety and Security Science","","",""
"uuid:e327124a-5c5a-4b75-8793-f9866289ec66","http://resolver.tudelft.nl/uuid:e327124a-5c5a-4b75-8793-f9866289ec66","A Framework for Co‐Design Processes and Visual Collaborative Methods: An Action Research Through Design in Chile","Gaete Cruz, M. (TU Delft Urban Development Management); Ersoy, A. (TU Delft Urban Development Management); Czischke, D.K. (TU Delft Real Estate Management); van Bueren, Ellen (TU Delft Management in the Built Environment)","","2022","With the urgency to adapt cities to social and ecological pressures, co-design has become essential to legitimise transformations by involving citizens and other stakeholders in their design processes. Public spaces remain at the heart of this transformation due to their accessibility for citizens and capacity to accommodate urban functions. However, urban landscape design is a complex task for people who are not used to it. Visual collaborative methods (VCMs) are often used to facilitate expression and ideation early in design, offering an arts-based language in which actors can communicate. We developed a co-design process framework to analyse how VCMs contribute to collaboration in urban processes throughout the three commonly distinguished design phases: conceptual, embodiment, and detail. We participated in a co-design process in the Atacama Desert in Chile, adopting an Action Research through Design (ARtD) approach in planning, undertaking and reflecting in practice. We found that VCMs are useful to facilitate collaboration throughout the process in design cycles. The variety of VCMs used was able to foster co-design in a rather non-participatory context and influenced the design outcomes. The framework recognized co-design trajectories such as the early fuzziness and the ascendant co-design trajectory throughout the process. The co-design process framework aims for conceptual clarification and may be helpful in planning and undertaking such processes in practice. 
We conclude that urban co-design should be planned and analysed as a long-term process of interwoven collaborative trajectories.","co‐design; co‐design process; public space; urban co‐design; visual methods","en","journal article","","","","","","","","","","Management in the Built Environment","Urban Development Management","","",""
"uuid:408ac6ca-f0dc-4d79-abbf-618cdea1c78f","http://resolver.tudelft.nl/uuid:408ac6ca-f0dc-4d79-abbf-618cdea1c78f","Bonding Process of Copper Foam-Silver Composite and Performance Characterization of the Joint","Lv, Guoping (Guilin University of Electronic Technology); Yan, Haidong (Zhejiang University); Yan, Haidong (Guilin University of Electronic Technology); Yang, Daoguo (Guilin University of Electronic Technology); Wu, Xinke (Zhejiang University); Sheng, Kuang (Zhejiang University); Liu, Chaohui (National New Energy Vehicle Technology Innovation); Zhang, Yakun (National New Energy Vehicle Technology Innovation); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials)","","2022","As key heat-dissipating and electrical interconnecting components in high-temperature power modules, die-attach and substrate-attach layers play an important role in effectively reducing the thermal resistance and improving the long-term reliability. Traditional substrate-attach materials limit the high-temperature applications of packaging modules due to their high thermal resistance and high-temperature reliability. To solve the above deficiency, a copper foam-silver composite was proposed in this paper, which was prepared by mixing a copper foam solid skeleton with micron silver paste. According to the results of thermogravimetric analysis (TGA) of the silver paste, the preheating process was determined and the samples were sintered at 270 °C and 10 MPa. The influence of different preparation technologies on the quality of the sintered joint was investigated. The morphology characteristics and distribution of sintered silver in the copper foam were observed by scanning electron microscope (SEM). The results show that the sintered silver of group C samples can be uniformly filled into the solid skeleton of the copper foam, and the densification degree is high, without cracks, delamination, and holes. 
The shear strength can reach 55 MPa.","large-area bonding; copper foam-Ag composite film; preparation process","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Electronic Components, Technology and Materials","","",""
"uuid:5a8b8819-e376-4971-a3a6-c312a301aadf","http://resolver.tudelft.nl/uuid:5a8b8819-e376-4971-a3a6-c312a301aadf","Revisiting the Partial Power Processing Concept: Case Study of a 5-kW 99.11% Efficient Flyback Converter-Based Battery Charger","Granello, P. (TU Delft DC systems, Energy conversion & Storage); Soeiro, Thiago B. (TU Delft DC systems, Energy conversion & Storage); van der Blij, N.H. (TU Delft DC systems, Energy conversion & Storage); Bauer, P. (TU Delft DC systems, Energy conversion & Storage)","","2022","This article proposes an analytical methodology to evaluate the performance of the main partial power processing (PPP) architectures in terms of the improvements in the system's conversion efficiency. This analysis considers the influence of the system's voltage gain, the auxiliary dc/dc converter's efficiency, and the possibility of bidirectional power flow. Herein, the key PPP architectures are, thus, modeled and benchmarked. The presented results attest to the series configuration as the most efficient PPP circuit solution, with no limits on the system voltage gain, contrary to the generalized results found in today's literature. To assess these results and the significance of the proposed analysis, a well-known, simple, and cost-effective flyback topology has been designed and tested for a series PPP circuit solution able to effectively interface a 5-kW battery energy storage system (BESS) to a 700-V dc grid. A relatively high power conversion efficiency and compact hardware are achieved due to the reduced size requirements on the input and output filtering stages. Above all, while explaining the PPP concept, this study shows that even converter circuits known for their low power efficiency can be used to derive highly efficient systems. 
A design approach is, thus, provided to facilitate the design of the presented PPP circuit, and measurements are, finally, carried out to compare the obtained results with the expected ones derived from the developed analytical models.","Battery charger; battery energy storage system (BESS); dc-dc power conversion; flyback converter; partial power processing (PPP)","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","DC systems, Energy conversion & Storage","","",""
"uuid:53fd2281-92a7-43f2-a7f4-4ea138dacb02","http://resolver.tudelft.nl/uuid:53fd2281-92a7-43f2-a7f4-4ea138dacb02","Comprehensive Human Oversight Framework to Ensure Accountability over Autonomous Weapon Systems","Verdiesen, E.P. (TU Delft Information and Communication Technology)","","2022","","accountability; autonomous weapon systems; responsibility; value deliberation process","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Information and Communication Technology","","",""
"uuid:39103f30-c111-4bb6-8b46-2434a0cf4d9a","http://resolver.tudelft.nl/uuid:39103f30-c111-4bb6-8b46-2434a0cf4d9a","Automatic Control of Hot Metal Temperature","Hashimoto, Y. (JFE Steel Corporation); Masuda, Ryosuke (JFE Steel Corporation); Mulder, Max (TU Delft Control & Simulation); van Paassen, M.M. (TU Delft Control & Simulation)","","2022","To achieve the automation of blast furnace operation, an automatic control system for hot metal temperature (HMT) was developed. Nonlinear model predictive control (NMPC) which predicts up to ten-hour-ahead HMT and calculates appropriate control actions of pulverized coal rate (PCR) was constructed. Simulation validation showed that the NMPC algorithm generates control actions similar to those by the operators and that HMT can be maintained within ±10 °C of the set point. The automatic control system using NMPC was then implemented in an actual plant. As a result, the developed control system suppressed the effects of disturbances, such as the changes in the coke ratio and blast volume, and successfully reduced the average control error of HMT by 4.6 °C compared to the conventional manual operation. The developed control system has contributed to the reduction of reducing agent rate (RAR) and CO2 emissions.","blast furnace; process control; reducing agent rate; thermal control","en","journal article","","","","","","","","","","","Control & Simulation","","",""
"uuid:aab53b12-00eb-4444-ad63-4e9da91e287d","http://resolver.tudelft.nl/uuid:aab53b12-00eb-4444-ad63-4e9da91e287d","Position-Dependent Snap Feedforward: A Gaussian Process Framework","van Haren, Max (Eindhoven University of Technology); Poot, Maurice (Eindhoven University of Technology); Portegies, Jim (Eindhoven University of Technology); Oomen, T.A.E. (TU Delft Team Jan-Willem van Wingerden; Eindhoven University of Technology)","","2022","Mechatronic systems have increasingly high performance requirements for motion control. The low-frequency contribution of the flexible dynamics, i.e., the compliance, should be compensated for by means of snap feedforward to achieve high accuracy. Position-dependent compliance, which often occurs in motion systems, requires the snap feedforward parameter to be modeled as a function of position. Position-dependent compliance is compensated for by using a Gaussian process to model the snap feedforward parameter as a continuous function of position. A simulation of a flexible beam shows that a significant performance increase is achieved when using the Gaussian process snap feedforward parameter to compensate for position-dependent compliance.","Training; Mechatronics; Dynamics; Gaussian processes; Feedforward systems; Motion control; MIMO communication","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-03-05","","","Team Jan-Willem van Wingerden","","",""
"uuid:b6e24e45-8a91-4121-b360-2daea668add4","http://resolver.tudelft.nl/uuid:b6e24e45-8a91-4121-b360-2daea668add4","Performance Analysis of the Wind Field Estimation for a Very Fast Scanning Weather Radar","Dash, T.K. (TU Delft Microwave Sensing, Signals & Systems); Krasnov, O.A. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2022","The performance and limitations of the Doppler processing of the scattered signals from extended meteorological objects (precipitation) are analysed in the case of radar with fast azimuthal scanning. The classical method of the Discrete Fourier Transform (DFT) has been applied to simulated weather radar signals to estimate the Doppler velocity spectrum and characterise it with the mean Doppler velocity and the Doppler spectrum width. The accuracy and resolution of these estimations have been analysed as a function of the scanning radar rotation speed. Finally, the performances of the 2D wind field retrieval are analysed in relation to the accuracy and resolution of Doppler spectra estimations. The wind field retrieval has been done using the classical velocity azimuthal display (VAD) retrieval technique that gives an overall/average estimate of the wind field over an observation region. A few possible approaches for improving the accuracy and resolution of a fast scanning weather radar Doppler signal processing are proposed and analysed based on simulated scanning radar data.","Weather radars; Doppler processing; DFT; VAD","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-04-04","","","Microwave Sensing, Signals & Systems","","",""
"uuid:867e7f25-6b2c-4769-8018-8f5ad2d2ac38","http://resolver.tudelft.nl/uuid:867e7f25-6b2c-4769-8018-8f5ad2d2ac38","Communication-Efficient Cluster Scalable Genomics Data Processing Using Apache Arrow Flight","Ahmad, T. (TU Delft Computer Engineering); Ma, Chengxin (Student TU Delft); Al-Ars, Z. (TU Delft Computer Engineering); Hofstee, H.P. (TU Delft Computer Engineering)","Gurrola, Javier (editor)","2022","Current cluster scaled genomics data processing solutions rely on big data frameworks like Apache Spark, Hadoop and HDFS for data scheduling, processing and storage. These frameworks come with additional computation and memory overheads by default. It has been observed that scaling genomics dataset processing beyond 32 nodes is not efficient on such frameworks. To overcome the inefficiencies of big data frameworks for processing genomics data on clusters, we introduce a low-overhead and highly scalable solution on a SLURM based HPC batch system. This solution uses Apache Arrow as an in-memory columnar data format to store genomics data efficiently and Arrow Flight as a network protocol to move and schedule this data across the HPC nodes with low communication overhead. As a use case, we use NGS short-read DNA sequencing data for pre-processing and variant calling applications. This solution outperforms existing Apache Spark based big data solutions in terms of both computation time (2x) and communication overhead (20-60% lower depending on cluster size). Our solution has similar performance to MPI-based HPC solutions, with the added advantage of easy programmability and transparent big data scalability. The whole solution is Python and shell script based, which makes it flexible to update and integrate alternative variant callers. 
Our solution is publicly available on GitHub at https://github.com/abs-tudelft/time-to-fly-high/tree/main/genomics","Genomics; Whole Genome/Exome Sequencing; Big Data; Apache Arrow; In-Memory; Plasma Object Store; Parallel Processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Computer Engineering","","",""
"uuid:361ceb9f-3765-4f9d-a474-ae89ba1aa70e","http://resolver.tudelft.nl/uuid:361ceb9f-3765-4f9d-a474-ae89ba1aa70e","On the Evaluation of NLP-based Models for Software Engineering","Izadi, M. (TU Delft Software Engineering); Ahmadabadi, Martin Nili (University of Tehran)","","2022","NLP-based models have been increasingly incorporated to address SE problems. These models are either employed in the SE domain with little to no change, or they are greatly tailored to source code and its unique characteristics. Many of these approaches are considered to be outperforming or complementing existing solutions. However, an important question arises here: Are these models evaluated fairly and consistently in the SE community? To answer this question, we reviewed how NLP-based models for SE problems are being evaluated by researchers. The findings indicate that currently there is no consistent and widely-accepted protocol for the evaluation of these models. While different aspects of the same task are being assessed in different studies, metrics are defined based on custom choices, rather than a system, and finally, answers are collected and interpreted case by case. Consequently, there is a dire need to provide a methodological way of evaluating NLP-based models to have a consistent assessment and preserve the possibility of fair and efficient comparison.","Evaluation; Natural Language Processing; Software Engineering","en","conference paper","IEEE","","","","","","","","","","Software Engineering","","",""
"uuid:a2d52aca-e1c5-4fd2-beee-10a2fd7656e9","http://resolver.tudelft.nl/uuid:a2d52aca-e1c5-4fd2-beee-10a2fd7656e9","CatIss: An Intelligent Tool for Categorizing Issues Reports using Transformers","Izadi, M. (TU Delft Software Engineering)","","2022","Users use Issue Tracking Systems to keep track and manage issue reports in their repositories. An issue is a rich source of software information that contains different reports including a problem, a request for new features, or merely a question about the software product. As the number of these issues increases, it becomes harder to manage them manually. Thus, automatic approaches are proposed to help facilitate the management of issue reports. This paper describes CatIss, an automatic Categorizer of Issue reports which is built upon the Transformer-based pre-trained RoBERTa model. CatIss classifies issue reports into three main categories of Bug report, Enhancement/feature request, and Question. First, the datasets provided for the NLBSE tool competition are cleaned and preprocessed. Then, the pre-trained RoBERTa model is fine-tuned on the preprocessed dataset. Evaluating CatIss on about 80 thousand issue reports from GitHub, indicates that it performs very well surpassing the competition baseline, TicketTagger, and achieving 87.2% F1-score (micro average). Additionally, as CatIss is trained on a wide set of repositories, it is a generic prediction model, hence applicable for any unseen software project or projects with little historical data. Scripts for cleaning the datasets, training CatIss and evaluating the model are publicly available.","Issue report Management; Classification, Repositories; Transformers; Machine Learning; Natural Language Processing","en","conference paper","IEEE","","","","","","","","","","Software Engineering","","",""
"uuid:7db8ff71-442c-4eae-a532-c0d41e1901d0","http://resolver.tudelft.nl/uuid:7db8ff71-442c-4eae-a532-c0d41e1901d0","Task-Aware Connectivity Learning for Incoming Nodes Over Growing Graphs","Das, B. (TU Delft Multimedia Computing); Hanjalic, A. (TU Delft Intelligent Systems); Isufi, E. (TU Delft Multimedia Computing)","","2022","Data processing over graphs is usually done on graphs of fixed size. However, graphs often grow with new nodes arriving over time. Knowing the connectivity information of these nodes, and thus, the expanded graph is crucial for processing data over the expanded graph. In its absence, its inference and the subsequent data processing become essential. This paper provides contributions along this direction by considering task-driven data processing for incoming nodes without connectivity information. We model the incoming node attachment as a random process dictated by the parameterized vectors of probabilities and weights of attachment. The attachment is driven by the existing graph topology, the corresponding graph signal, and an associated processing task. We consider two such tasks, one of interpolation at the incoming node, and that of graph signal smoothness. We show that the model bounds implicitly the spectral perturbation between the nominal topology of the expanded graph and the drawn realizations. 
In the absence of connectivity information our topology, task, and data-aware stochastic attachment performs better than purely data-driven and topology-driven stochastic attachment rules, as is confirmed by numerical results over synthetic and real data.","Graph signal interpolation; graph signal processing; graph smoothness; graph topology identification; incoming nodes; Interpolation; Network topology; Numerical models; Perturbation methods; spectral perturbation; Stochastic processes; Task analysis; Topology","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","Intelligent Systems","Multimedia Computing","","",""
"uuid:b38264d8-b3f6-450e-9693-61271caa8628","http://resolver.tudelft.nl/uuid:b38264d8-b3f6-450e-9693-61271caa8628","A preliminary investigation of the potential benefits of using the ASTRA Bridge for short-span bridge deck refurbishment projects in Switzerland","Zumstein, Marco (ETH Zürich); Chen, Qian (ETH Zürich; University of British Columbia); Adey, Bryan T. (ETH Zürich); Hall, Daniel M. (TU Delft Design & Construction Management; ETH Zürich)","","2022","How bridge refurbishment projects are performed requires a trade-off between the speed and cost of the project and the amount of traffic disturbances during the project. A possible way to help reach a better balance between these two extremes is the ASTRA Bridge developed in Switzerland. The ASTRA Bridge is a 236-meter long steel ramp system on wheels, which is placed on top of the bridge deck undergoing refurbishment to enable vehicles to continue to pass over the bridge while construction work progresses underneath. This study illustrates new refurbishment processes by using the ASTRA Bridge and presents the first quantitative analysis of the effects of using the ASTRA Bridge on the time, costs and traffic disturbances associated with bridge refurbishment. The bridge investigated is a short-span (50 m long) highway bridge requiring refurbishment of its superstructure. The analysis indicates that the use of the ASTRA Bridge resulted in reductions in duration and costs (14% and 3% for the example), and a substantial reduction in user costs (51% for the example). 
Although more analysis is required for different types of refurbishment projects, the initial results indicate that the ASTRA Bridge may become an integral part of future highway bridge refurbishment projects.","ASTRA Bridge; construction automation; cost reduction; discrete-event simulation; highway bridges; infrastructure refurbishment; process modelling; traffic disturbances","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Design & Construction Management","","",""
"uuid:f6572162-644c-4a45-8ad0-abeb7e208d87","http://resolver.tudelft.nl/uuid:f6572162-644c-4a45-8ad0-abeb7e208d87","Conditioning continuous-time Markov processes by guiding","Corstanje, M.A. (TU Delft Statistics; Vrije Universiteit Amsterdam); van der Meulen, F.H. (TU Delft Statistics; Vrije Universiteit Amsterdam); Schauer, M.R. (TU Delft Statistics; University of Gothenburg)","","2022","A continuous-time Markov process X can be conditioned to be in a given state at a fixed time T>0 using Doob's h-transform. This transform requires the typically intractable transition density of X. The effect of the h-transform can be described as introducing a guiding force on the process. Replacing this force with an approximation defines the wider class of guided processes. For certain approximations the law of a guided process approximates, and is equivalent to, the actual conditional distribution, with a tractable likelihood ratio. The main contribution of this paper is to prove that the principle of a guided process, introduced in [M. Schauer, F. van der Meulen, and H. van Zanten, Guided proposals for simulating multi-dimensional diffusion bridges, Bernoulli 23 (2017a), pp. 2917–2950. doi:10.3150/16-BEJ833] for stochastic differential equations, can be extended to a more general class of Markov processes. In particular, we apply the guiding technique to jump processes in discrete state spaces. The Markov process perspective enables us to improve upon existing results for hypo-elliptic diffusions.","conditional process; diffusions; Doob's h-transform; guided process; jump processes; landmark dynamics; Markov processes","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:a2d4e771-88ae-4255-bf65-3997c1d14d1d","http://resolver.tudelft.nl/uuid:a2d4e771-88ae-4255-bf65-3997c1d14d1d","Situation Awareness Prompts: Bridging the Gap between Supervisory and Manual Air Traffic Control","Kim, Munyung (Student TU Delft); Borst, C. (TU Delft Control & Simulation); Mulder, Max (TU Delft Control & Simulation)","","2022","To meet increasing safety and performance demands in air traffic control (ATC), more advanced automated systems will be introduced to assist human air traffic controllers. Some even foresee complete automation, with the human as a supervisor only to step in when automation fails. Literature and empirical evidence suggest that supervising highly-automated systems can cause severe vigilance and complacency problems, out-of-the-loop situation awareness, and transient workload peaks. These impair the ability of humans to successfully take over control. In this study, situation awareness (SA) prompts were used as a way to keep controllers cognitively engaged during their supervision of a fully automated ATC system. Results from an exploratory human-in-the-loop experiment, in which eight participants were instructed to monitor a fully automated ATC system in a simplified ATC context, show a significant decrease in workload peaks following an automation failure after being exposed to high-level SA questions. Although the selected method did not necessarily yield improved safety and manual control efficiency, results suggest that using situation awareness feedback in line with controllers' attention could be an avenue worth exploring further as a training tool.","cooperation; Decision making and cognitive processes; degree of automation; Human centred automation; Shared control","en","journal article","","","","","","","","","","","Control & Simulation","","",""
"uuid:e23261e2-64ee-4e8c-9fc6-88701d0fe85c","http://resolver.tudelft.nl/uuid:e23261e2-64ee-4e8c-9fc6-88701d0fe85c","Determining Air Traffic Controller Proficiency: Identifying Objective Measures Using Clustering","de Jong, T. P. (Student TU Delft); Borst, C. (TU Delft Control & Simulation)","","2022","Air traffic control (ATC) is a complex and demanding job reserved for highly-trained professionals. Training ATC candidates is challenging as trainees are subjectively assessed by instructors who are biased by their own ways of working. In an effort to determine control expertise objectively, this study employed clustering techniques on an existing data set in which course and professional controllers participated in a medium-fidelity simulation experiment. Results identified a set of eight measures that formed two distinct and stable expertise clusters. A subsequent sensitivity analysis was able to reveal how far (or close) each course participant was positioned from the expert cluster and on which measures those participants deviated from the experts. At this stage, however, it is difficult to translate these results into specific advice on how to improve underdeveloped skills. Despite the small sample size and limited generalizability of the results in this exploratory study, the method appears to be a promising approach for determining objective factors that describe ATC expertise, warranting further research.","cooperation; Decision making and cognitive processes; degree of automation; Human centred automation; Shared control","en","journal article","","","","","","","","","","","Control & Simulation","","",""
"uuid:cc57336e-c994-49a9-a570-f8ed87b37592","http://resolver.tudelft.nl/uuid:cc57336e-c994-49a9-a570-f8ed87b37592","A New Input-Parallel-Output-Series Three-Phase Hybrid Rectifier for Heavy-Duty Electric Vehicle Chargers","Qiang, Rui (Student TU Delft); Wu, Y. (TU Delft DC systems, Energy conversion & Storage); Soeiro, Thiago Batista (European Space Agency (ESA)); Granello, P. (Sapienza University of Rome); Qin, Z. (TU Delft DC systems, Energy conversion & Storage); Bauer, P. (TU Delft DC systems, Energy conversion & Storage)","","2022","This paper proposes a circuit-topology solution for heavy-duty electric vehicle (HDEV) chargers. Building on the original hybrid rectifier, a new unidirectional Input-Parallel-Output-Series (IPOS) three-phase hybrid rectifier is proposed and analyzed. The IPOS topology is advantageous at ultra-high power ratings for interfacing the next-generation HDEV batteries, which require a high and wide output voltage range of 800~1500 V, using available 600/1200 V commercial semiconductors. Moreover, the proposed topology is efficient, cost-effective, and scalable, with the grid input current harmonic components in compliance with the IEEE-519 standard. The benefits of the IPOS topology are supported by circuit derivation, control strategy, analytical modelling, simulation, and experimental verification.","AC-DC converter; fast charging; hybrid rectifier; partial power processing; power factor correction","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","DC systems, Energy conversion & Storage","","",""
"uuid:c510dee7-5e9b-479b-acfc-4dde6d0eb50c","http://resolver.tudelft.nl/uuid:c510dee7-5e9b-479b-acfc-4dde6d0eb50c","Complex Knowledge Base Question Answering: A Survey","Lan, Yunshi (East China Normal University); He, G. (TU Delft Web Information Systems); Jiang, Jinhao (Renmin University of China); Jiang, Jing (Singapore Management University); Xin Zhao, Wayne (Renmin University of China); Wen, Ji Rong (Renmin University of China)","","2022","Knowledge base question answering (KBQA) aims to answer a question over a knowledge base (KB). Early studies mainly focused on answering simple questions over KBs and achieved great success. However, their performance on complex questions is still far from satisfactory. Therefore, in recent years, researchers have proposed a large number of novel methods that look into the challenges of answering complex questions. In this survey, we review recent advances in KBQA with a focus on solving complex questions, which usually contain multiple subjects, express compound relations, or involve numerical operations. In detail, we begin by introducing the complex KBQA task and relevant background. Then, we present two mainstream categories of methods for complex KBQA, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods. Specifically, we illustrate their procedures with flow designs and discuss their differences and similarities. Next, we summarize the challenges that these two categories of methods encounter when answering complex questions, and explicate advanced solutions as well as techniques used in existing work. After that, we discuss the potential impact of pre-trained language models (PLMs) on complex KBQA. To help readers catch up with state-of-the-art (SOTA) methods, we also provide a comprehensive evaluation and resources for the complex KBQA task.
Finally, we conclude and discuss several promising directions related to complex KBQA for future research.","Cognition; Compounds; Knowledge base; knowledge base question answering; Knowledge based systems; natural language processing; question answering; Question answering (information retrieval); Semantics; survey; Task analysis; TV","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-10-26","","","Web Information Systems","","",""
"uuid:a846b380-e498-45c6-af55-cae3900236cd","http://resolver.tudelft.nl/uuid:a846b380-e498-45c6-af55-cae3900236cd","Processing of Fibre Reinforced Polymers by Controlled Radical Induced Cationic Frontal Polymerisation","Staal, Jeroen (Swiss Federal Institute of Technology); Smit, Edgar (Swiss Federal Institute of Technology); Caglar, Baris (TU Delft Aerospace Manufacturing Technologies); Michaud, Véronique (Swiss Federal Institute of Technology)","Vassilopoulos, Anastasios P. (editor); Michaud, Véronique (editor)","2022","Radical Induced Cationic Frontal Polymerisation (RICFP) has recently been proposed as a promising strategy for processing of epoxide carbon fibre reinforced polymers. Control of the local heat balance is crucial for the production of industrial-quality composites, which is typically achieved via controlling the heat generation. In this work we present a comprehensive overview of RICFP processing of cycloaliphatic epoxide composites with enhanced heat insulation. The thermal initiating compound was identified as the main component to control heat generation, which correlated well with the front velocity. A processing window was defined as a function of the fibre and initiator contents, and composites with up to 45.8% Vf were successfully produced. Optimisation of the resulting mechanical properties was made possible by optimisation of the heat balance, with matrix glass transition temperatures of up to 187°C achieved for the cycloaliphatic system used. Post-curing was found to be beneficial in overcoming the suggested inhomogeneous curing due to the dual-scale nature of fabrics.","frontal polymerization; composite processing; fibre reinforced polymer composite","en","conference paper","EPFL Lausanne, Composite Construction Laboratory","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:9af9c49d-1a4c-4cc4-af40-d88b575a2093","http://resolver.tudelft.nl/uuid:9af9c49d-1a4c-4cc4-af40-d88b575a2093","Identifying problem frames in design conversation","Chandrasegaran, R.S.K. (TU Delft Methodologie en Organisatie van Design); Akdag Salah, A.A. (TU Delft Methodologie en Organisatie van Design)","Lloyd, P.A. (editor); Lockton, D. (editor); Lenzi, S.L. (editor); Hekkert, P.P.M. (editor); Oak, A. (editor); Sadaba, J. (editor)","2022","Design thinking concepts such as framing, storytelling, and co-evolution have been widely identified as part of design activity, though they have generally been evidenced through manual coding of design conversations and close reading of transcripts. The increase in easy-to-use computational linguistic methodologies provides an opportunity not only to validate these concepts, but also to compare them to other kinds of activity in large datasets. However, the process of systematically identifying such concepts in design conversation is not straightforward. In this paper we explore methods of linguistic analysis for revealing problem frames within design process transcripts. We find that frames can be identified through n-grams with high mutual information scores, used at low frequencies, along with subsequent lexical entrainment. Furthermore, we show how frames are organised in primary and secondary structures. Our results represent a step forward in computationally determining frames in datasets featuring design, or design-like activity.","design process; framing; language; computational linguistic methods","en","conference paper","Design Research Society","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:bf4c7299-9c4b-4c2a-934d-8b2f25b9d0c6","http://resolver.tudelft.nl/uuid:bf4c7299-9c4b-4c2a-934d-8b2f25b9d0c6","Dual-scale visualization of resin flow for liquid composite molding processes","Teixidó, Helena (Swiss Federal Institute of Technology); Caglar, Baris (TU Delft Aerospace Manufacturing Technologies); Michaud, Véronique (Swiss Federal Institute of Technology)","Vassilopoulos, Anastasios P. (editor); Michaud, Véronique (editor)","2022","Visualization of resin flow progression through fibrous preforms is often sought to elucidate flow patterns and validate models for filling prediction for liquid composite molding processes. Here, conventional X-ray radiography is compared to the X-ray phase contrast technique to image in-situ constant flow rate impregnation of a non-translucent unidirectional carbon fabric. X-ray attenuation of the fluid phase was increased by using a ZnI2-based contrasting agent, leading to sufficient contrast between the liquid and the low-density fibers. We proved the suitability of conventional X-ray transmission to visualize fluid paths by elucidating different flow patterns, spanning from capillary to viscous regimes, as well as a macro-void entrapment phenomenon.","Liquid Composite Molding (LCM); Resin flow; Saturation curve; Process monitoring; X-ray imaging","en","conference paper","EPFL Lausanne, Composite Construction Laboratory","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:22759e9a-510e-44c1-8ec8-d2bb8ae5c37b","http://resolver.tudelft.nl/uuid:22759e9a-510e-44c1-8ec8-d2bb8ae5c37b","Unlocking the Flexibility of District Heating Pipeline Energy Storage with Reinforcement Learning","Stepanovic, K. (TU Delft Algorithmics); Wu, J. (Flex Technologies); Everhardt, Rob (Flex Technologies); de Weerdt, M.M. (TU Delft Algorithmics)","","2022","","4th generation district heating; combined heat and power economic dispatch; Markov decision process; Q-learning; pipeline energy storage; mixed-integer nonlinear program","en","abstract","","","","","","","","","","","Algorithmics","","",""
"uuid:b7c5a34d-4dfa-4484-8869-ee5edda4ea84","http://resolver.tudelft.nl/uuid:b7c5a34d-4dfa-4484-8869-ee5edda4ea84","Human Performance in Solving Multi-UAV Over-Constrained Dynamic Vehicle Routing Problems","Gupta, Ankit (Student TU Delft); Borst, C. (TU Delft Control & Simulation); Mulder, Max (TU Delft Control & Simulation)","","2022","For many logistics applications, such as drone delivery missions, finding an optimized network of routes yields a Vehicle Routing Problem (VRP). Such optimizations are mostly conducted offline prior to actual operations for reasons of computational complexity. In case disturbances arise during operations, for example a sudden loss of a vehicle, the VRP needs to be re-optimized in real-time, and this raises concerns regarding obtaining a solution in time. In a previous study, it was demonstrated that humans, when supported through a human-machine interface, can quickly deal with these routing problems through satisficing, providing workable solutions. This paper extends our previous research by exposing human operators to an over-constrained VRP with different mission priorities and vehicle capabilities. Experiment results (n = 16) indicate that the mission type had the largest impact on how participants used the interface and what constraints were relaxed. In particular, during a search-and-rescue context the mission emphasis was put on delivering (medical) payload (close) to as many customers as possible, even if this would involve sacrificing vehicles and relaxing the depot constraint. Ethical aspects of the VRP, which algorithms by themselves do not take into account, are considered, underlining the importance of involving humans in automation. Human operators complement algorithms with their context awareness, yielding safer, more resilient, and more responsible systems.","cognitive processes; Decision making; Human operator support","en","journal article","","","","","","","","","","","Control & Simulation","","",""
"uuid:6549b9d6-c581-4179-9774-f6b5a81c06cc","http://resolver.tudelft.nl/uuid:6549b9d6-c581-4179-9774-f6b5a81c06cc","Peripheralization through mass housing urbanization in Hong Kong, Mexico City, and Paris","Kockelkorn, A.M. (TU Delft Space & Type; Universiteit Gent); Schmid, Christian (ETH Zürich); Streule, Monika (London School of Economics and Political Science); Wong, Kit Ping (Osaka Metropolitan University)","","2022","This article compares how state-initiated mass housing urbanization has contributed to processes of peripheralization in three very different historical and geopolitical settings: in Paris from the 1950s to the 1990s, in Hong Kong from the 1950s to the 2010s, and in Mexico City from the 1990s to the 2010s. We understand mass housing urbanization as large-scale industrial housing production based on the intervention of state actors into the urbanization process, which leads to the strategic re-organization of urban territories. In this comparison across space and time, we focus particularly on how, when, and to what degree this urbanization process leads to the peripheralization of settlements and entire neighbourhoods over the course of several decades. This long-term perspective allows us to evaluate not only the decisive turns and ruptures within governmental rationales but also the continuities and contradictions of their territorial effects. Finally, we develop a taxonomy of different modalities of peripheralization that might serve as a conceptual tool for further urban research.","Peripheralization; mass housing urbanization; urbanization processes; financialization of housing; neoliberal restructuring; territorial inequality; Hong Kong; Paris; Mexico City","en","journal article","","","","","","","","","","","Space & Type","","",""
"uuid:92d20484-cd58-48f3-a08c-f473fc8dd265","http://resolver.tudelft.nl/uuid:92d20484-cd58-48f3-a08c-f473fc8dd265","Validation of Integrated EV Chassis Controller Using a Geographically Distributed X-in-the-loop Network","Beliautsou, Viktar (Ilmenau University of Technology); Alfonso, Jesus (Instituto Tecnologico de Aragon); Giltay, J.N.P. (TU Delft Intelligent Vehicles); Büchner, Florian (Ilmenau University of Technology); Shyrokau, B. (TU Delft Intelligent Vehicles); Castellanos, Jose A. (Universidad de Zaragoza); Ivanov, Valentin (Ilmenau University of Technology)","","2022","This paper presents the validation of an integrated chassis controller that unites three groups of actuators for the electric vehicle (EV) with independent in-wheel electric motors (IWMs) for each wheel. Controlled actuators are the IWMs, the active suspension, and the braking system. The models of test benches and the designed architecture of the X-in-the-loop network are presented. The proposed design approach allows testing the developed controller on a vehicle model in real-time and on hardware components.","Control System; Testing processes; X-in-the-Loop (XIL); Hardware-in-the-loop (HIL); Electric vehicle; In-wheel motor","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-06-05","","","Intelligent Vehicles","","",""
"uuid:17b3a233-9f05-4909-a570-f34a39335fc6","http://resolver.tudelft.nl/uuid:17b3a233-9f05-4909-a570-f34a39335fc6","Semi-Automatic Perspective Lines from Paintings","Coudert-Osmont, Yoann (Lorraine University); Eisemann, E. (TU Delft Computer Graphics and Visualisation); Marroquim, Ricardo (TU Delft Computer Graphics and Visualisation)","Pintus, R. (editor); Ponchio, F. (editor)","2022","Perspective cues play an important role in painting analysis as they may unveil important characteristics about the painter's techniques and creation process. Nevertheless, extracting perspective lines and their corresponding vanishing points is usually a laborious manual task. Moreover, small variations in the lines may lead to large variations in the vanishing points. In this work, we propose a semi-automatic method to extract perspective lines from paintings in order to mitigate the human variability factor and reduce the workload.","Computing methodologies; Image processing; Applied computing; Fine arts","en","conference paper","The Eurographics Association","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:0843a9a2-b912-4849-8b72-ce4f541ca259","http://resolver.tudelft.nl/uuid:0843a9a2-b912-4849-8b72-ce4f541ca259","Gaussian Process based Feedforward Control for Nonlinear Systems with Flexible Tasks: With Application to a Printer with Friction","Van Meer, Max (Eindhoven University of Technology); Poot, Maurice (Eindhoven University of Technology); Portegies, Jim (Eindhoven University of Technology); Oomen, T.A.E. (TU Delft Team Jan-Willem van Wingerden; Eindhoven University of Technology)","","2022","Feedforward control is essential to achieving good tracking performance in positioning systems. The aim of this paper is to develop an identification strategy for inverse models of systems with nonlinear dynamics of unknown structure using input-output data, which can be used to generate feedforward signals for a-priori unknown tasks. To this end, inverse systems are regarded as noncausal nonlinear finite impulse response (NFIR) systems, and modeled as a Gaussian Process with a stationary kernel function that imposes properties such as smoothness. The approach is validated experimentally on a consumer printer with friction and shown to lead to improved tracking performance with respect to linear feedforward.","Feedforward Control; Gaussian Process regression; Grey box modelling; Identification for control; Nonlinear system identification","en","journal article","","","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:1a7b9b77-2240-4e0c-b392-a951ab1d9451","http://resolver.tudelft.nl/uuid:1a7b9b77-2240-4e0c-b392-a951ab1d9451","Event-Based Communication in Distributed Q-Learning","Jarne Ornia, D. (TU Delft Team Manuel Mazo Jr); Mazo, M. (TU Delft Team Manuel Mazo Jr)","","2022","We present an approach to reduce the communication of information needed on a Distributed Q-Learning system inspired by Event Triggered Control (ETC) techniques. We consider a baseline scenario of a Distributed Q-Learning problem on a Markov Decision Process (MDP). Following an event-based approach, N agents sharing a value function explore the MDP and compute a trajectory-dependent triggering signal which they use distributedly to decide when to communicate information to a central learner in charge of computing updates on the action-value function. These decision functions form an Event Based distributed Q learning system (EBd-Q), and we derive convergence guarantees resulting from the reduction of communication. We then apply the proposed algorithm to a cooperative path planning problem, and show how the agents are able to learn optimal trajectories communicating a fraction of the information. Additionally, we discuss what effects (desired and undesired) these event-based approaches have on the learning processes studied, and how they can be applied to more complex multi-agent systems.","Q-learning; Markov processes; Control systems; Trajectory; Multi-agent systems; Convergence; Event-Triggered Control; Reinforcement Learning; Distributed Systems","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-10","","","Team Manuel Mazo Jr","","",""
"uuid:b1b5e2f4-4a7f-45c0-a7ca-0f2eb79c3755","http://resolver.tudelft.nl/uuid:b1b5e2f4-4a7f-45c0-a7ca-0f2eb79c3755","Forks in the road: Critical design moments for identifying key processes in stakeholder interaction","Pearce, B.J. (TU Delft Policy Analysis); Dallo, Irina (ETH Zürich); Choi, Victoria; Freihardt, Jan; Middel, Cédric","","2022","While the importance of transdisciplinary (Td) processes as a means to address societal problems is well-established, guidance for the intentional design of stakeholder interactions to meet specific goals, under different conditions and contexts, remains less explored. We propose the concept of critical design moments (CDMs) as a lens through which to identify key processes in the design of stakeholder interactions that affect the relevance and impact of their outcomes. We demonstrate how an approach using CDMs can help to make explicit not only the goals of stakeholder interactions, but also how these goals might be met through the process design of specific activities orienting these interactions. The CDMs were identified as part of the implementation of a Td winter school for early career researchers to provide them with real-world experiences of interacting with stakeholders and local residents of a community. This work provides an approach for how Td stakeholder interactions can be designed in other Td contexts.","critical design moments; design thinking; process design; stakeholder interaction; transdisciplinary processes","en","journal article","","","","","","","","","","","Policy Analysis","","",""
"uuid:7aebe9f5-a752-4df2-a6fa-238df0ed779d","http://resolver.tudelft.nl/uuid:7aebe9f5-a752-4df2-a6fa-238df0ed779d","A Cross-Field Review of State Abstraction for Markov Decision Processes","Congeduti, E. (TU Delft Computer Science & Engineering-Teaching Team; TU Delft Interactive Intelligence); Oliehoek, F.A. (TU Delft Interactive Intelligence)","","2022","Complex real-world systems pose a significant challenge to decision making: an agent needs to explore a large environment, deal with incomplete or noisy information, generalize the experience and learn from feedback to act optimally. These processes demand vast representation capacity, thus putting a burden on the agent’s limited computational and storage resources. State abstraction enables effective solutions by forming concise representations of the agent’s world. As such, it has been widely investigated by several research communities, which have produced a variety of different approaches. Nonetheless, relations among them still remain unseen or only roughly defined. This hampers potential applications of solution methods whose scope remains limited to the specific abstraction context for which they have been designed. To this end, the goal of this paper is to organize the developed approaches and identify connections between abstraction schemes as a fundamental step towards methods generalization. As a second contribution, we discuss general abstraction properties with the aim of supporting a unified perspective for state abstraction.","State Abstraction; Model Irrelevance; Robust Reinforcement Learning; Bounded Parameters Markov Decision Processes","en","conference paper","","","","","","","","","","","Computer Science & Engineering-Teaching Team","","",""
"uuid:93e094ca-0bf7-4fe3-b4fa-fa3831dd4006","http://resolver.tudelft.nl/uuid:93e094ca-0bf7-4fe3-b4fa-fa3831dd4006","Distributed Demand Side Management With Stochastic Wind Power Forecasting","Scarabaggio, P. (Polytechnic University of Bari); Grammatico, S. (TU Delft Team Bart De Schutter); Carli, Raffaele (Polytechnic University of Bari); Dotoli, Mariagrazia (Polytechnic University of Bari)","","2022","In this article, we propose a distributed demand-side management (DSM) approach for smart grids taking into account uncertainty in wind power forecasting. The smart grid model comprehends traditional users as well as active users (prosumers). Through a rolling-horizon approach, prosumers participate in a DSM program, aiming at minimizing their cost in the presence of uncertain wind power generation by a game theory approach. We assume that each user selfishly formulates its grid optimization problem as a noncooperative game. The core challenge in this article is defining an approach to cope with the uncertainty in wind power availability. We tackle this issue from two different sides: by employing the expected value to define a deterministic counterpart for the problem and by adopting a stochastic approximated framework. In the latter case, we employ the sample average approximation (SAA) technique, whose results are based on a probability density function (PDF) for the wind speed forecasts. We improve the PDF by using historical wind speed data, and by employing a control index that takes into account the weather condition stability. 
Numerical simulations on a real data set show that the proposed stochastic strategy generates lower individual costs compared to the standard expected value approach.","Demand-side management (DSM); model predictive control; Optimization; sample average approximation (SAA); smart grid; Smart grids; stochastic optimization; Stochastic processes; Uncertainty; Wind forecasting; Wind power generation; Wind speed","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Team Bart De Schutter","","",""
"uuid:16b0fa60-ab59-4326-8394-4d746673a277","http://resolver.tudelft.nl/uuid:16b0fa60-ab59-4326-8394-4d746673a277","Virtual sensing in an onshore wind turbine tower using a Gaussian process latent force model","Bilbao Nieva, J.A. (Energie Baden-Württemberg; Student TU Delft); Lourens, E. (TU Delft Dynamics of Structures; TU Delft Offshore Engineering); Schulze- Bonhage, Andreas (Energie Baden-Württemberg); Ziegler, Lisa (Energie Baden-Württemberg)","","2022","Wind turbine towers are subjected to highly varying internal loads, characterized by large uncertainty. The uncertainty stems from many factors, including what the actual wind fields experienced over time will be, modeling uncertainties given the various operational states of the turbine with and without controller interaction, the influence of aerodynamic damping, and so forth. To monitor the true experienced loading and assess the fatigue, strain sensors can be installed at fatigue-critical locations on the turbine structure. A more cost-effective and practical solution is to predict the strain response of the structure based only on a number of acceleration measurements. In this contribution, an approach is followed where the dynamic strains in an existing onshore wind turbine tower are predicted using a Gaussian process latent force model. By employing this model, both the applied dynamic loading and strain response are estimated based on the acceleration data. The predicted dynamic strains are validated using strain gauges installed near the bottom of the tower. Fatigue is subsequently assessed by comparing the damage equivalent loads calculated with the predicted as opposed to the measured strains. 
The results confirm the usefulness of the method for continuous tracking of fatigue life consumption in onshore wind turbine towers.","Fatigue load monitoring; Gaussian process; input estimation; latent force models; state estimation","en","journal article","","","","","","","","","","","Dynamics of Structures","","",""
"uuid:3572e978-5f41-4aba-a1f6-e17d3ad7edc2","http://resolver.tudelft.nl/uuid:3572e978-5f41-4aba-a1f6-e17d3ad7edc2","Value Change, Value Conflict, and Policy Innovation: Understanding the Opposition to the Market-Based Economic Dispatch of Electricity Scheme in India Using the Multiple Streams Framework","Goyal, N. (TU Delft Organisation & Governance); Iychettira, K.K. (Indian Institute of Technology Delhi; Harvard Kennedy School)","","2022","As policy innovation is essential for upscaling responsible innovation, understanding its relationship to value change(s) occurring or sought in sociotechnical systems is imperative. In this study, we ask: what are the different types of values in the policy process? And, how does value change influence policy innovation? We propose a disaggregation of values and value change based on a four-stream variant of the multiple streams framework (MSF), a conceptual lens increasingly used for explaining policy innovation in sociotechnical transitions. Specifically, we posit that the values that ‘govern’ problem framing, policy design, political decision making, and technological diffusion can evolve relatively independently, potentially leading to value conflict. We apply this framework to the ongoing case of the market-based economic dispatch of electricity (MBED) policy in the Indian energy transition using content analysis. We find that the MBED scheme—with its emphasis on efficiency (problem), economic principles (policy), low-cost dispatch (technology), and centralization (politics)—attempts value change in each stream. Each instance of value change is, however, widely contested, with the ensuing value conflicts resulting in significant opposition to this policy innovation. 
We conclude that a disaggregation of values based on the MSF can facilitate an analysis of value change and value conflict in sociotechnical transitions and lay the foundation for systematically studying the relationships among technological change, value change, and policy change.","Indian energy transition; Market-based economic dispatch of electricity (MBED); Multiple streams framework (MSF); Policy innovation; Policy process; Renewable energy; Value change; Value conflict","en","journal article","","","","","","","","","","","Organisation & Governance","","",""
"uuid:a6da8efd-9052-4909-a28f-b3b78b6d4c5f","http://resolver.tudelft.nl/uuid:a6da8efd-9052-4909-a28f-b3b78b6d4c5f","A Novel Approach to Unambiguous Doppler Beam sharpening for Forward-looking MIMO Radar","Yuan, S. (TU Delft Microwave Sensing, Signals & Systems); Aubry, P.J. (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2022","The ambiguity problem of targets in Doppler beam sharpening (DBS) with forward-looking radar is considered. While DBS was proposed earlier to improve the angular resolution of the radar while keeping the antenna aperture size limited, such a solution suffers from ambiguities in the case of targets positioned symmetrically with respect to the platform movement. To address this problem, an approach named unambiguous Doppler-based forward-looking multiple-input multiple-output (MIMO) radar beam sharpening scan (UDFMBSC) is proposed, based on the combination of MIMO processing and DBS. The performance of the proposed method is compared to existing approaches using simulated data with point-like and extended targets. The method is successfully verified using experimental data.","Beam scan; Doppler beam sharpening; Doppler effect; Doppler radar; Forward-looking radar; MIMO radar processing; Radar; Radar antennas; Radar imaging; Sensors; Signal resolution","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-05-01","","","Microwave Sensing, Signals & Systems","","",""
"uuid:858f8330-0719-4de7-ac38-04b04d128c8e","http://resolver.tudelft.nl/uuid:858f8330-0719-4de7-ac38-04b04d128c8e","Scaling Agile Company-Wide: The Organizational Challenge of Combining Agile-Scaling Frameworks and Enterprise Architecture in Service Companies","Van Wessel, Robert M. (Erasmus Universiteit Rotterdam); Kroon, Philip; de Vries, H.J. (TU Delft Values Technology and Innovation; TU Delft Economics of Technology and Innovation; Erasmus Universiteit Rotterdam)","","2022","Many organizations have embraced agile methods. Studies show a trend of large-scale application of agile frameworks company-wide. Emergent architecture design as part of an agile approach is effective at the project level but causes issues when services need to interact seamlessly at the enterprise level. Enterprise architecture (EA) can provide such coherence. Combining scaling agile methods with EA is challenging. However, such a combination could benefit from the flexibility that agile approaches offer and provide the consistency and long-term focus that EA pursues. This article uses longitudinal case study research to explore how organizations can effectively govern Agile and EA in large-scale agile transformations. Our case analysis shows that methods for scaling Agile do not provide sufficient guidance to properly handle the transformation from existing EA practices to an Agile EA combination company-wide. We propose how EA can be applied effectively in large-scale agile transformations despite the two seemingly conflicting approaches of Agile and EA. Based on our findings, we propose a conceptual model for future research that incorporates factors that take EA into account in the governance of agile-scaling frameworks.
Our findings extend current literature on coordination mechanisms between architects and agile teams in large-scale agile transformations, thereby balancing emergent and intentional architectures.","Agile methods; agile-scaling frameworks (ASFs); collaborations in technology management; enterprise architecture (EA); new service development; organizational change; project management; software process management","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","Values Technology and Innovation","Economics of Technology and Innovation","","",""
"uuid:c3b4c67b-f871-4f46-9a2a-e726f7c6e3db","http://resolver.tudelft.nl/uuid:c3b4c67b-f871-4f46-9a2a-e726f7c6e3db","Towards a framework for urban landscape co-design: Linking the participation ladder and the design cycle","Gaete Cruz, M. (TU Delft Urban Development Management); Ersoy, A. (TU Delft Urban Development Management); Czischke, D.K. (TU Delft Real Estate Management); van Bueren, Ellen (TU Delft Management in the Built Environment)","","2022","With the increasing social and ecological pressures on urban settlements, re-thinking how we produce them becomes a growing concern. Due to the diversity of actors across sectors and backgrounds involved in such design processes, collaboration is of utmost importance. Co-design can thus play a crucial role in integrating aims and knowledge as an evolving institutional process toward feasible, suitable and legitimate projects. While many studies on co-design focus on one-time activities, little attention is paid to conceptualising how such processes occur, involving several actors in dynamic participatory ways. We propose a Co-Design Framework and suggest that collaboration is achieved at many levels within different design steps in the process. Analysing three Chilean public space co-design processes through the lens of our framework, we highlight the intrinsic diversity of such an approach. This study posits that three co-design arenas (strategic, transdisciplinary, and socio-cultural) interact according to their main aims to enable, inform, and legitimise the projects. Our framework contributes to conceptualising and analysing co-design and may also be useful to plan and develop such processes in academia and practice.","Chile; Co-design; collaborative design; design process; public space; urban landscape","en","journal article","","","","","","","","","","Management in the Built Environment","Urban Development Management","","",""
"uuid:1a75ec15-2a37-40e3-98fa-dbdeff906d2a","http://resolver.tudelft.nl/uuid:1a75ec15-2a37-40e3-98fa-dbdeff906d2a","Online Edge Flow Imputation on Networks","Money, Rohan (University of Agder); Krishnan, Joshin (Simula Metropolitan Center for Digital Engineering); Beferull-Lozano, Baltasar (University of Agder); Isufi, E. (TU Delft Multimedia Computing)","","2022","An online algorithm for missing data imputation for networks with signals defined on the edges is presented. Leveraging the prior knowledge intrinsic to real-world networks, we propose a bi-level optimization scheme that exploits the causal dependencies and the flow conservation, respectively via (i) a sparse line graph identification strategy based on a group-Lasso and (ii) a Kalman filtering-based signal reconstruction strategy developed using simplicial complex (SC) formulation. The advantages of this first SC-based attempt for time-varying signal imputation have been demonstrated through numerical experiments using EPANET models of both synthetic and real water distribution networks.","Kalman filters; Laplace equations; Line Graph; Missing Flow Imputation; Optimization; Reactive power; Signal processing algorithms; Signal reconstruction; Simplicial Complex; Time series analysis; Topological Signal Processing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Multimedia Computing","","",""
"uuid:a55d958b-10a3-4c83-bc35-a27d8de14c45","http://resolver.tudelft.nl/uuid:a55d958b-10a3-4c83-bc35-a27d8de14c45","1D-DGAN-PHM: A 1-D denoising GAN for Prognostics and Health Management with an application to turbofan","Lourenço Baptista, M. (TU Delft Air Transport & Operations); Henriques, Elsa M.P. (Lisbon Technical University)","","2022","The performance of prognostics is closely related to the quality of condition monitoring signals (e.g., temperature, pressure, or vibration signals), which reveal the degradation of the system of interest. However, typical condition monitoring signals include noise and outliers. Disentangling noise from these signals is essential to obtain the actual degradation trajectories. Different denoising methods have been proposed in prognostics. Conventional denoising methods have low complexity but usually do not preserve edge information and do not involve physical considerations. A promising deep learning approach is denoising generative models. This approach, popular in Computer Vision, has been shown to outperform other classical techniques but has seldom been applied to 1-D signals in prognostics. In this paper, we propose the 1-D Denoising Generative Adversarial Network for Prognostics and Health Management (1D-DGAN-PHM). The 1D-DGAN-PHM is trained on synthetic data generated by a custom data generator that infuses physics-of-failure knowledge in paired samples of noisy and noise-free trajectories. The network consists of two components, a denoising generator and a discriminator. The denoising generator aims to learn to denoise a 1-D input signal. The discriminator guides the learning by comparing noise-free signals with signals from the denoising generator. Advantages of the 1D-DGAN-PHM include the physics-of-failure information in the synthetic data generator and the model sophistication.
In this work, we apply the 1D-DGAN-PHM to denoise the raw signals derived from NASA's C-MAPSS simulator of an aircraft turbofan engine. Baseline methods are Moving Average, Median filter, Savitzky–Golay filter, and a denoising autoencoder. The 1D-DGAN-PHM produces smooth trajectories and preserves the initial linear degradation of the signals. The 1D-DGAN-PHM has the most significant improvement in prognosability (on average, 0.73 to 0.81). Data from the 1D-DGAN-PHM resulted in the best MAE (29 to 25 cycles) and RMSE (score of 39 to 36) for a Random Forest. The code is publicly available at 1D-DGAN-PHM.","Denoising; Edge-preserving method; Failure prognostics; Generative Adversarial Network; Prognostics and Health Management; Signal processing","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:ae58263f-8732-48dc-91d4-48898852c710","http://resolver.tudelft.nl/uuid:ae58263f-8732-48dc-91d4-48898852c710","KrakenOnMem: A Memristor-Augmented HW/SW Framework for Taxonomic Profiling","Shahroodi, Taha; Zahedi, M.Z. (TU Delft Computer Engineering); Abhairaj Singh, A. (TU Delft Electrical Engineering, Mathematics and Computer Science); Wong, J.S.S.M. (TU Delft Computer Engineering); Hamdioui, S. (TU Delft Quantum & Computer Engineering)","","2022","State-of-the-art taxonomic profilers that comprise the first step in larger-context metagenomic studies have proven to be computationally intensive, i.e., while accurate, they come at the cost of high latency and energy consumption. The Table Lookup operation is a primary bottleneck of today's profilers. In this paper, we first propose TL-PIM, a hardware accelerator based on the processing-in-memory (PIM) paradigm to accelerate Table Lookup. TL-PIM leverages the in-memory compute capability of emerging memory technologies along with intelligent data mapping. Then, we integrate TL-PIM into Kraken2, a state-of-the-art metagenomic profiler, and build an HW/SW co-designed profiler, called KrakenOnMem. Results from a silicon-based prototype of our emerging memory validate the design and required operations on a smaller scale. Our large-scale calibrated simulations show that KrakenOnMem can provide an average of 61.3% speedup compared to the original Kraken2 for end-to-end profiling. Additionally, our design improves the energy consumption by orders of magnitude compared to the original Kraken2 while incurring a negligible area overhead.","(Hash) table lookup; Emerging memories; In memory Processing; Kraken2; Taxonomic profiling","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:e23fee38-2286-4ddd-b04a-51bd6baff6a3","http://resolver.tudelft.nl/uuid:e23fee38-2286-4ddd-b04a-51bd6baff6a3","Proof of Delivery Smart Contract for Performance Measurements","Madhwal, Yash (Skolkovo Institute of Science and Technology); Borbon-Galvez, Yari (LIUC Università Carlo Cattaneo); Etemadi, N. (TU Delft Safety and Security Science; LIUC Università Carlo Cattaneo); Yanovich, Yury (Skolkovo Institute of Science and Technology); Creazza, Alessandro (LIUC Università Carlo Cattaneo)","","2022","The growth of enterprise blockchain research supporting supply chain management calls for investigation of its impact and mindfulness of its design, use cases, and pilots. With a blockchain design for the Proof of Delivery (PoD) process management, this paper contributes to learning about performance measurement and the transaction costs implications during the development and application of smart contracts. An experimental design science approach is applied to develop an open-source blockchain to explore ways to make the delivery processes more efficient, the proof of delivery more reliable, and the performance measurements more accurate. The theory of Transaction Costs is applied to evaluate the cost implications of the adoption of smart contracts in the management of the PoD. The findings show that smart contracts make the delivery processes more efficient and proof of delivery more reliable. Yet, the methods and metrics are too complex and qualitative, limiting the smart contract's capability to measure performance. Our findings indicate a potential reduction in transaction costs from implementing blockchain-based performance measurement. The complexities of the delivery process and proof of delivery call for pre-contractual steps to identify the processes and performance metrics to design blockchains. Smart contracts need further development and digital aids to handle qualitative inspections and proof of delivery generation during the delivery process.
The blockchain requires the system's capacity to record off-chain transactions, such as in the case of dispute resolution. The authors extended blockchain research beyond the theoretical level, designing an open-source blockchain for supply chain management within the use case, pilot design, and case study.","Actual Time of Arrival; Blockchain; Blockchains; Costs; Delivery Performance; Delivery Process; Distributed ledger; Estimated Time of Arrival; Ethereum; Measurement; Proof of Delivery; Smart Contract; Smart contracts; Supply chain management; Supply chains; Transaction Costs Theory","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:31ceb902-501e-4ca6-a1eb-2873cff0b420","http://resolver.tudelft.nl/uuid:31ceb902-501e-4ca6-a1eb-2873cff0b420","Rethinking the design of a 2-methoxy-2-methyl-heptane process by unraveling the true thermodynamics and kinetics","Patraşcu, Iulian (Politehnica University of Bucharest); Bîldea, Costin Sorin (Politehnica University of Bucharest); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering; The University of Manchester)","","2022","Among other fuel additives — such as MTBE, ETBE, or TAME — 2-methoxy-2-methyl heptane (MMH) can increase the fuel octane number and reduce CO emissions. MMH can be obtained through the exothermal etherification of 2-methyl-1-heptene and methanol. Lately, many researchers have developed more and more efficient processes considering the kinetics corresponding to an endothermal reaction. However, in this work we demonstrate that the reaction is actually quite exothermal, and this has a strong impact on the designed process. Also, the vapor–liquid equilibrium data predicted by the UNIQUAC model for the 2-methoxy-2-methyl heptane and 2-methyl-2-heptanol mixture reveal that product purification is more difficult and requires more energy to recover and obtain MMH with high purity. Considering these aspects, the 54.87 ktpy process developed in this paper is more realistic and energy intensive (1.82 kW h/kg MMH), with a TAC of 5.3 M$/year.
The controllability of the process is proven for ±20% changes in the 2-methoxy-2-methyl-heptane production rate.","2-Methoxy-2-methyl-heptane; Plantwide control; Process design; Process simulation","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","ChemE/Product and Process Engineering","","",""
"uuid:c6a139a3-52af-4d40-a960-6c2bea48af75","http://resolver.tudelft.nl/uuid:c6a139a3-52af-4d40-a960-6c2bea48af75","A vision for design in the era of collective computing","Jung, Jiwon (TU Delft Methodologie en Organisatie van Design); Kleinsmann, M.S. (TU Delft Methodologie en Organisatie van Design); Snelders, H.M.J.J. (TU Delft Methodologie en Organisatie van Design)","","2022","In this study, we envision engineering design activities for collective computing, an upcoming era of complex systems of massive social interaction through a wide variety of connected computing devices. A literature review reveals how collective computing, compared to the previous eras of personal and ubiquitous computing, may lead to new design tasks and design processes, as well as new roles for designers. Based on this review, new design activities for the collective computing era are envisioned, and further revised in an interview study with 24 informants. The result is a vision for design in the collective computing era, with actionable guidance for designers in terms of a coherent set of new design activities proposed in relation to advances in computing.","design futures; Industrial design; modes of design; prescriptive models of the design process; system(s) design","en","journal article","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:d1e1e961-1eb5-4148-97d9-c9b375d77910","http://resolver.tudelft.nl/uuid:d1e1e961-1eb5-4148-97d9-c9b375d77910","Random Assignment Versus Fixed Assignment in Multilevel Importance Splitting for Estimating Stochastic Reach Probabilities","Ma, H. (TU Delft Air Transport & Operations; Northwestern Polytechnical University); Blom, H.A.P. (TU Delft Air Transport & Operations)","","2022","This paper focuses on estimating the reach probability of a closed unsafe set by a stochastic process. A well-developed approach is to make use of multi-level MC simulation, which consists of encapsulating the unsafe set by a sequence of increasing closed sets and conducting a sequence of MC simulations to estimate the reach probability of each inner set from the previous set. An essential step is to copy (split) particles that have reached the next level (inner set) prior to conducting an MC simulation to the next level. The aim of this paper is to prove that the variance of the multi-level MC estimated reach probability under fixed assignment splitting is smaller than or equal to that under random assignment splitting. The approaches are illustrated for a geometric Brownian motion example.","Interacting particles; Monte Carlo method; Multi-dimensional diffusion process; Multilevel importance splitting; Reach probability","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:987a714d-5cf0-451c-ab29-84f843c78704","http://resolver.tudelft.nl/uuid:987a714d-5cf0-451c-ab29-84f843c78704","Online Multi-Robot Task Assignment with Stochastic Blockages","Wilde, N. (TU Delft Learning & Autonomous Control); Alonso-Mora, J. (TU Delft Learning & Autonomous Control)","","2022","In this paper we study the multi-robot task assignment problem with tasks that appear online and need to be serviced within a fixed time window in an uncertain environment. For example, when deployed in dynamic, human-centered environments, the team of robots may not have perfect information about the environment. Parts of the environment may temporarily become blocked and blockages may only be observed on location. While numerous variants of the Canadian Traveler Problem describe the path planning aspect of this problem, little work has been done on multi-robot task allocation (MRTA) under this type of uncertainty. In this paper, we introduce and theoretically analyze the problem of MRTA with recoverable online blockages. Based on a stochastic blockage model, we compute offline tours using the expected travel costs for the online routing problem. The cost of the offline tours is used in a greedy task assignment algorithm. In simulation experiments we highlight the performance benefits of the proposed method under various settings.","Costs; Uncertainty; Heuristic algorithms; Computational modeling; Stochastic processes; Routing; Robustness","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Learning & Autonomous Control","","",""
"uuid:856a5681-2d68-4f42-9cc6-71c26a06b10e","http://resolver.tudelft.nl/uuid:856a5681-2d68-4f42-9cc6-71c26a06b10e","Prediction-Based Reachability Analysis for Collision Risk Assessment on Highways","Wang, X. (TU Delft Learning & Autonomous Control); Li, Z. (TU Delft Transport and Planning; Beijing Institute of Technology); Alonso-Mora, J. (TU Delft Learning & Autonomous Control); Wang, M. (TU Delft Transport and Planning; Technische Universität Dresden)","","2022","Real-time safety systems are crucial components of intelligent vehicles. This paper introduces a prediction-based collision risk assessment approach on highways. Given a point mass vehicle dynamics system, a stochastic forward reachable set considering two-dimensional motion with vehicle state probability distributions is first established. We then develop an acceleration prediction model, which provides multi-modal probabilistic acceleration distributions to propagate vehicle states. The collision probability is calculated by summing up the probabilities of the states where two vehicles spatially overlap. Simulation results show that the prediction model has superior performance in terms of vehicle motion position errors, and the proposed collision detection approach is agile and effective in identifying collisions in cut-in crash events.","Road transportation; Intelligent vehicles; Simulation; Stochastic processes; Predictive models; Probability distribution; Risk management","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-01-19","","Transport and Planning","Learning & Autonomous Control","","",""
"uuid:aaac9208-ae4d-4d82-8bfb-c04be7b20505","http://resolver.tudelft.nl/uuid:aaac9208-ae4d-4d82-8bfb-c04be7b20505","A virtual experiment for measuring system resilience: a case of chemical process systems","Sun, H. (TU Delft Safety and Security Science; China University of Petroleum (East China)); Yang, M. (TU Delft Safety and Security Science; Universiti Teknologi Malaysia); Wang, Haiqing (China University of Petroleum (East China))","","2022","Resilience is an emergent property of a system, which changes with various internal and external factors. Resilience is also a hidden property of a system that cannot be observed directly. Thus, experiments should be performed for a given system to measure its resilience. However, physical experiments are practically impossible. Inspired by the tensile test for the stress-strain curve in Material Science, this paper proposes a virtual experiment for measuring system resilience and applies it to a chemical process system. The physical parameters of system resilience of a process system are mapped to those of material resilience. A process system is viewed as a 'specimen' in this experiment. The system performance variation caused by disruptions is seen as the displacement of the specimen caused by the applied load. In the absorption phase, the rate at which system performance decreases is determined by the failure rate of components under disruptive conditions. Response time, including fault diagnosis time and resource allocation time, is used to represent adaptation ability. Restoration ability depends on the repair rate of components. For simplicity, the proposed method is applied to resilience assessment of a release prevention barrier system used in the Chevron Richmond refinery crude unit and its associated upstream process.","resilience; process safety; chemical process system; hazardous operation","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:67dcf697-9326-4cad-b39a-c3d28ff7e03c","http://resolver.tudelft.nl/uuid:67dcf697-9326-4cad-b39a-c3d28ff7e03c","A STAMP-based approach to quantitative resilience assessment of chemical process systems","Sun, H. (TU Delft Safety and Security Science; China University of Petroleum (East China)); Wang, Haiqing (China University of Petroleum (East China)); Yang, M. (TU Delft Safety and Security Science); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven)","","2022","Chemical process systems (CPSs) involve complex dynamic processes. Moreover, emergent and uncertain hazards and disruptions cannot be entirely identified and prevented by conventional methods. In those situations, resilience for CPSs plays an essential role in absorbing and adapting to disruptions, and restoring from damage. Systemic modeling plays a vital role in assessing resilience. A system-based analysis model, the system-theoretic accident model and processes (STAMP), can provide a robust framework. This paper develops a comprehensive methodology to systematically model and assess system resilience. STAMP is employed to model and analyze the system safety of a process system. A new method of dynamic resilience assessment is then proposed to quantify the resilience of the system. The proposed method is applied to the diesel oil hydrogenation system. The results show that it quantifies the resilience of complex process systems considering human and organizational factors in a dynamic manner.","Chemical process systems; Resilience assessment; STAMP; Systemic","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:a52c8c3a-4c61-4873-b484-8ff11a5a76b3","http://resolver.tudelft.nl/uuid:a52c8c3a-4c61-4873-b484-8ff11a5a76b3","Microstructure-informed deep convolutional neural network for predicting short-term creep modulus of cement paste","Liang, M. (TU Delft Materials and Environment); Gan, Y. (TU Delft Materials and Environment); Chang, Z. (TU Delft Materials and Environment); Wan, Z. (TU Delft Materials and Environment); Schlangen, E. (TU Delft Materials and Environment); Šavija, B. (TU Delft Materials and Environment)","","2022","This study aims to provide an efficient alternative for predicting creep modulus of cement paste based on Deep Convolutional Neural Network (DCNN). First, a microscale lattice model for short-term creep is adopted to build a database that contains 18,920 samples. Then, 3 DCNNs with different consecutive convolutional layers are built to learn from the database. Finally, the performance of DCNNs is tested on unseen testing samples. The results show that the DCNNs can achieve high accuracy in the testing set, with R2 values all higher than 0.96. The distribution of creep modulus predicted by the DCNNs coincides with that of the original data. Furthermore, through analyzing the feature maps, it is found that the DCNNs can correctly capture the local importance of different microstructural phases. The DCNN therefore allows prediction of the creep modulus directly from microstructural input, saving the computational cost of the segmentation procedure and multiple incremental FEM calculations.","Cement paste; Convolutional neural network; Creep; Image processing; Prediction","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:5df90966-44ab-41ef-a011-f776749d0d54","http://resolver.tudelft.nl/uuid:5df90966-44ab-41ef-a011-f776749d0d54","GPR-assisted evaluation of probabilistic fatigue crack growth in rib-to-deck joints in orthotropic steel decks considering mixed failure models","Heng, J. (TU Delft Steel & Composite Structures; Shenzhen University); Zhou, Zhixiang (Shenzhen University); Zou, Yang (Chongqing Jiaotong University); Kaewunruen, Sakdirat (University of Birmingham)","","2022","Rib-to-deck (RD) welded joints in orthotropic steel decks (OSDs) of bridges demonstrate two major fatigue failure models: toe-to-deck (TTD) cracking and root-to-deck (RTD) cracking. Generally, a single failure model is employed in the fatigue assessment of RD joints, causing dispute over the dominant failure model. In this paper, the fatigue crack growth (FCG) in RD joints has been evaluated considering uncertainties and mixed failure models. A probabilistic fatigue crack growth (PFCG) model is first established for the RD joint, in which two crack-like initial flaws are assumed at the weld toe and root of the RD joint. After that, Gaussian process regression is used to assist and accelerate the PFCG simulation. Then, the PFCG model is implemented on a typical OSD with the random traffic model. Finally, the result of the PFCG model is discussed in detail, including the failure model, fatigue reliability and life prediction, and crack size evolution. It is revealed that both the TTD and RTD cracking models contribute notably to fatigue failure and cannot be ignored. More crucially, a remarkable reduction can be observed in the fatigue reliability of RD joints when considering mixed failure models.
This study not only highlights the influence of mixed failure models on the fatigue performance of welded joints, but also provides insight into the application of novel machine learning tools to traditional structural problems.","Gaussian process regression; Mixed failure models; Orthotropic steel deck; Probabilistic fatigue crack growth; Rib-to-deck joint","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Steel & Composite Structures","","",""
"uuid:d7304be2-a7f9-4d52-a9e5-88ac0de05fc0","http://resolver.tudelft.nl/uuid:d7304be2-a7f9-4d52-a9e5-88ac0de05fc0","Realistic correction of sky-coloured points in Mobile Laser Scanning point clouds","González, Elena (University of Vigo); Balado Frías, J. (TU Delft GIS Technologie; University of Vigo); Arias, Pedro (University of Vigo); Lorenzo, Henrique (University of Vigo)","","2022","The enrichment of point clouds with colour images improves the visualisation of the data as well as the segmentation and recognition processes. Coloured point clouds are becoming increasingly common; however, the colour they display is not always as expected. Errors in the colouring of point clouds acquired with Mobile Laser Scanning are due to perspective in the camera image, differing resolutions, or poor calibration between the LiDAR sensor and the image sensor. The consequences of these errors are noticeable for elements captured in images but not in point clouds, such as the sky. This paper focuses on the correction of sky-coloured points without resorting to the images that were initially used to colour the whole point cloud. The proposed method consists of three stages. First, the region of interest, where the erroneously coloured points accumulate, is selected. Second, the sky-coloured points are detected by calculating the colour distance in the Lab colour space to a sample of the sky colour. Third, the colour of the detected sky-coloured points is restored from the colour of nearby points. The method is tested on ten real case studies with their corresponding point clouds from urban and rural areas. In two case studies, sky-coloured points were assigned manually; in the remaining eight, the sky-coloured points derive from acquisition errors. The algorithm for sky-coloured point detection obtained an average F1-score of 94.7%. 
The results show a correct reassignment of colour, texture, and patterns, while improving the point cloud visualisation.","Coloured point cloud; Image processing; Lab colour space; LiDAR; Mobile Mapping Systems; Point cloud processing","en","journal article","","","","","","","","","","","GIS Technologie","","",""
"uuid:9f68ba7d-e291-43ac-b6e1-7a1d7c429074","http://resolver.tudelft.nl/uuid:9f68ba7d-e291-43ac-b6e1-7a1d7c429074","Wind load estimation and virtual sensing in long-span suspension bridges using physics-informed Gaussian process latent force models","Petersen, W. (Norwegian University of Science and Technology (NTNU)); Øiseth, O. (Norwegian University of Science and Technology (NTNU)); Lourens, E. (TU Delft Dynamics of Structures; TU Delft Offshore Engineering)","","2022","Wind loading is an essential aspect in the design and assessment of long-span bridges, but it is often not well-known and cannot be measured directly. Most structural health monitoring systems can easily measure structural responses at discrete locations using accelerometers. This data can be combined with reduced-order modal models in Kalman filter-based algorithms for an inverse estimation of wind loads and system states. As a further development, this work investigates the incorporation of Gaussian process latent force models (GP-LFMs), which can characterize the evolution of the wind loading. The Hardanger Bridge, a 1310 m long suspension bridge instrumented with a monitoring system for wind and vibrations, is used as a case study. It is shown how the LFMs can be enriched with physical information about the stochastic wind loads using monitoring anemometer data and aerodynamic coefficients from wind tunnel tests. It is found that the estimates of the modal wind loads and modal states obtained from a Kalman filter and Rauch–Tung–Striebel smoother are stable for acceleration output only, thus avoiding the accumulation of errors. 
The proposed approach demonstrates how physical or environmental data can be injected as valuable information for global monitoring strategies and virtual sensing in bridges.","Force identification; Gaussian process; Latent force model; Response prediction; Structural monitoring; Suspension bridge; Virtual sensing; Wind engineering","en","journal article","","","","","","","","","","","Dynamics of Structures","","",""
"uuid:c529e0e2-c3ce-4dbc-ade4-c04f55feb69b","http://resolver.tudelft.nl/uuid:c529e0e2-c3ce-4dbc-ade4-c04f55feb69b","Combined Detection of Surface Changes and Deformation Anomalies Using Amplitude-Augmented Recursive InSAR Time Series","Hu, F. (Fudan University); van Leijen, F.J. (TU Delft Mathematical Geodesy and Positioning); Chang, L. (University of Twente); Wu, Jicang (Tongji University); Hanssen, R.F. (TU Delft Mathematical Geodesy and Positioning)","","2022","Synthetic aperture radar (SAR) missions with short repeat times enable opportunities for near real-time deformation monitoring. Traditional multitemporal interferometric SAR (MT-InSAR) is able to monitor long-term and periodic deformation with high precision by time-series analysis. However, as time series lengthen, it is time-consuming to update the current results by reprocessing the whole dataset. Additionally, the number of coherent scatterers varies over time, as scatterers disappear and emerge owing to inevitable changes in surface scattering, and potential deformation anomalies require changes in the prevailing deformation model. Here, we propose a novel method to analyze InSAR time series recursively and to detect both significant changes in scattering and deformation anomalies based on new acquisitions. Sequential change detection is developed to identify temporary coherent scatterers (TCSs) using amplitude time series. Based on the predicted phase residuals, scatterers with abnormal deformation displacements are identified by a generalized ratio test, while the parameters of stable scatterers are updated using Kalman filtering. The quality of the anomaly detection is assessed based on the detectability power and the minimum detectable deformation. This facilitates (near) real-time data processing and decreases the false alarm likelihood. 
Experimental results show that the technique can be used for the real-time evaluation of deformation risks.","Anomaly detection; change detection; multitemporal InSAR; recursive process","en","journal article","","","","","","","","","","","Mathematical Geodesy and Positioning","","",""
"uuid:44f714cb-7fb4-4b71-b10e-ed1ba97b5edc","http://resolver.tudelft.nl/uuid:44f714cb-7fb4-4b71-b10e-ed1ba97b5edc","Understanding the complex geomorphology of a deep sea area affected by continental tectonic indentation: The case of the Gulf of Vera (Western Mediterranean)","Ercilla, Gemma (Institut de Ciències Del Mar, CSIC); Galindo-Zaldívar, Jesús (Universidad de Granada; Instituto Andaluz de Ciencias de la Tierra, Granada); Estrada, Ferran (Institut de Ciències Del Mar, CSIC); Valencia, Javier (Lyra, Gazteiz); Juan, Carmen (Centro Oceanográfico de Málaga); Casas, David (Institut de Ciències Del Mar, CSIC); Alonso, Belén (Institut de Ciències Del Mar, CSIC); Comas, Mª Carmen (Instituto Andaluz de Ciencias de la Tierra, Granada); Azpiroz Zabala, M. (TU Delft Applied Geology)","","2022","We present a multidisciplinary study of morphology, stratigraphy, sedimentology, tectonic structure, and physical oceanography to report that the complex geomorphology of the Palomares continental margin and adjacent Algerian abyssal plain (i.e., Gulf of Vera, Western Mediterranean), is the result of the sedimentary response to the Aguilas Arc continental tectonic indentation in the Eurasian–Africa plate collision. The indentation is imprinted on the basement of the margin with elongated metamorphic antiforms that are pierced by igneous bodies, and synforms that accommodate the deformation and create a complex physiography. The basement is partially covered by Upper Miocene deposits sealed by the regional Messinian Erosive Surface characterized by palaeocanyons that carve the modern margin. These deposits and outcropping basement highs are then covered and shaped by Plio-Quaternary contourites formed under the action of the Light Intermediate and Dense Deep Mediterranean bottom currents. 
Even though bottom currents are responsible for the primary sedimentation that shapes the margin, 97% of this region's seafloor is affected by mass-movements that modified contourite sediments by eroding, deforming, faulting, sliding, and depositing sediments. Mass-movement processes have resulted in the formation of recurrent mass-flow deposits, an enlargement of the submarine canyons and gully incisions, and basin-scale gravitational slides spreading above the Messinian Salinity Crisis salt layer. The Polopo, Aguilas and Gata slides are characterized by an extensional upslope domain that shapes the continental margin, and by a downslope contractional domain that shapes the abyssal plain with diapirs piercing (hemi)pelagites/sheet-like turbidites creating a seafloor dotted by numerous crests. The mass movements were mostly triggered by the interplay of the continental tectonic indentation of the Aguilas Arc with sedimentological factors over time. The indentation, which involves the progressively southeastward tectonic tilting of the whole land-sea region, likely generated a quasi-continuous oversteepening of the entire margin, thus reducing the stability of the contourites. In addition, tectonic tilting and subsidence of the abyssal plain favoured the flow of the underlying Messinian Salinity Crisis salt layer, contributing to the gravitational instability of the overlying sediments over large areas of the margin and abyssal plain.","Continental margin; Contourites; Geomorphic processes; Mass movements; Tectonic indentation; Western Mediterranean","en","journal article","","","","","","","","","","","Applied Geology","","",""
"uuid:b8505ad5-6693-4bbe-a857-f711176d2107","http://resolver.tudelft.nl/uuid:b8505ad5-6693-4bbe-a857-f711176d2107","Decarbonizing ethanol production via gas fermentation: Impact of the CO/H2/CO2 mix source on greenhouse gas emissions and production costs","Almeida Benalcazar, E.F. (TU Delft BT/Biotechnology and Society; University of Campinas); Noorman, H.J. (TU Delft BT/Bioprocess Engineering; DSM); Maciel Filho, Rubens (University of Campinas); Posada Duque, J.A. (TU Delft BT/Biotechnology and Society)","","2022","This study explores key success factors for ethanol production via fermentation of gas streams, by assessing the effects of eight process variables driving the fermentation performance on the production costs and greenhouse gas emissions. Three fermentation feedstocks are assessed: off-gases from the steel industry, lignocellulosic biomass-derived syngas and a mixture of H2 and CO2. The analysis is done through a sequence of (i) sensitivity analyses based on stochastic simulations and (ii) multi-objective optimizations. In economic terms, the use of steel off-gas leads to the best performance and the highest robustness to low mass transfer coefficients, low microbial tolerance to ethanol, acetic-acid co-production and to dilution of the gas feed with CO2, due to the relatively high temperature at which the gas feedstock is available. The ethanol produced from the three feedstocks leads to lower greenhouse gas emissions than fossil-based gasoline and competes with first- and second-generation ethanol.","Bubble column; Ethanol; Gas fermentation; Process optimization; Sensitivity analysis; Syngas","en","journal article","","","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:dff68b4d-92c1-47c8-a27d-a053a8b958ca","http://resolver.tudelft.nl/uuid:dff68b4d-92c1-47c8-a27d-a053a8b958ca","Nociceptive Intra-epidermal Electric Stimulation Evokes Steady-State Responses in the Secondary Somatosensory Cortex","van den Berg, Boudewijn (University of Twente); Manoochehri, M. (TU Delft Biomechatronics & Human-Machine Control); Schouten, A.C. (TU Delft Biomechatronics & Human-Machine Control; Northwestern University Feinberg School of Medicine; University of Twente); van der Helm, F.C.T. (TU Delft Biomechatronics & Human-Machine Control; Northwestern University Feinberg School of Medicine); Buitenweg, Jan R. (University of Twente)","","2022","Recent studies have established the presence of nociceptive steady-state evoked potentials (SSEPs), generated in response to thermal or intra-epidermal electric stimuli. This study explores cortical sources and generation mechanisms of nociceptive SSEPs in response to intra-epidermal electric stimuli. Our method was to stimulate healthy volunteers (n = 22, all men) with 100 intra-epidermal pulse sequences. Each sequence had a duration of 8.5 s, and consisted of pulses with a pulse rate between 20 and 200 Hz, which was frequency modulated with a multisine waveform of 3, 7 and 13 Hz (n = 10, 1 excluded) or 3 and 7 Hz (n = 12, 1 excluded). As a result, evoked potentials in response to stimulation onset and contralateral SSEPs at 3 and 7 Hz were observed. The SSEPs at 3 and 7 Hz had an average time delay of 137 ms and 143 ms respectively. The evoked potential in response to stimulation onset had a contralateral minimum (N1) at 115 ms and a central maximum (P2) at 300 ms. Sources for the multisine SSEP at 3 and 7 Hz were found through beamforming near the primary and secondary somatosensory cortex. Sources for the N1 were found near the primary and secondary somatosensory cortex. Sources for the N2-P2 were found near the supplementary motor area. 
Harmonic and intermodulation frequencies in the SSEP power spectrum remained below a detectable level and no evidence for nonlinearity of nociceptive processing, i.e. processing of peripheral firing rate into cortical evoked potentials, was found.","Beamforming; Evoked potentials; Intra-epidermal stimulation; Nociceptive processing; Source localization; Steady-state evoked potentials","en","journal article","","","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:c376a29a-5e29-464c-8fec-addc1a1c05d2","http://resolver.tudelft.nl/uuid:c376a29a-5e29-464c-8fec-addc1a1c05d2","Global synchromodal shipment matching problem with dynamic and stochastic travel times: a reinforcement learning approach","Guo, W. (University of Quebec); Atasoy, B. (TU Delft Transport Engineering and Logistics); Negenborn, R.R. (TU Delft Transport Engineering and Logistics)","","2022","Global synchromodal transportation involves the movement of container shipments between inland terminals located in different continents using ships, barges, trains, trucks, or any combination among them through integrated planning at a network level. One of the challenges faced by global operators is the matching of accepted shipments with services in an integrated global synchromodal transport network with dynamic and stochastic travel times. The travel times of services are unknown and revealed dynamically during the execution of transport plans, but stochastic information about the travel times is assumed to be available. Matching decisions can be updated before shipments arrive at their destination terminals. The objective of the problem is to maximize the total profits that are expressed in terms of a combination of revenues, travel costs, transfer costs, storage costs, delay costs, and carbon tax over a given planning horizon. We propose a sequential decision process model to describe the problem. In order to address the curse of dimensionality, we develop a reinforcement learning approach to learn the value of matching a shipment with a service through simulations. Specifically, we adopt the Q-learning algorithm to update value function estimations and use the ϵ-greedy strategy to balance exploitation and exploration. Online decisions are created based on the estimated value functions. 
The performance of the reinforcement learning approach is evaluated in comparison to a myopic approach that does not consider uncertainties and a stochastic approach that sets chance constraints on feasible transshipment under a rolling horizon framework.","Dynamic and stochastic travel times; Global synchromodal shipment matching; Q-learning; Reinforcement learning; Sequential decision process","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-07-21","","","Transport Engineering and Logistics","","",""
"uuid:c9a2de2b-1897-4daf-91ff-7c0abe99757a","http://resolver.tudelft.nl/uuid:c9a2de2b-1897-4daf-91ff-7c0abe99757a","Fault detection and diagnosis to enhance safety in digitalized process system","Kopbayev, Alibek (Memorial University of Newfoundland); Khan, Faisal (Texas A and M University); Yang, M. (TU Delft Safety and Security Science); Halim, S. Zohra (Texas A and M University)","","2022","The increased complexity of digitalized process systems requires advanced tools to detect and diagnose faults early to maintain safe operations. This study proposes a hybrid model that consists of Kernel Principal Component Analysis (kPCA) and deep neural networks (DNNs), which can be applied to detect and diagnose faults in various processes. The complex data is processed by kPCA to reduce its dimensionality; then, the simplified data is used to train two separate DNNs, one for detection and one for diagnosis. The relative performance of the hybrid model is compared with conventional methods. The Tennessee Eastman Process was used to confirm the efficacy of the model. The results show that reducing input dimensionality increases classification accuracy. In addition, splitting detection and diagnosis into two DNNs results in reduced training times and increased classification accuracy. The proposed hybrid model serves as an important tool to detect faults and take early corrective actions, thus enhancing process safety.","Deep Neural Networks; Fault detection and diagnosis; Hybrid model; kPCA; Process system safety","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-05-22","","","Safety and Security Science","","",""
"uuid:0b802b47-b578-4302-9796-f847091e0e64","http://resolver.tudelft.nl/uuid:0b802b47-b578-4302-9796-f847091e0e64","Analysis of a tripartite entanglement distribution switch","Nain, Philippe (University Côte d'Azur); Vardoyan, G.S. (TU Delft QID/Wehner Group; TU Delft QuTech Advanced Research Centre; Kavli institute of nanoscience Delft); Guha, Saikat (University of Arizona); Towsley, Don (University of Massachusetts Amherst)","","2022","We study a quantum switch that distributes tripartite entangled states to sets of users. The entanglement switching process requires two steps: First, each user attempts to generate bipartite entanglement between itself and the switch, and second, the switch performs local operations and a measurement to create multipartite entanglement for a set of three users. In this work, we study a simple variant of this system, wherein the switch has infinite memory and the links that connect the users to the switch are identical. This problem formulation is of interest to several distributed quantum applications, while the technical aspects of this work result in new contributions within queueing theory. The state of the system is modeled as a continuous-time Markov chain (CTMC), and performance metrics of interest (probability of an empty system, switch capacity, expectation, and variance of the number of qubit-pairs stored) are computed via the solution of a two-dimensional functional equation obtained by reducing it to a boundary value problem on a closed curve. This work is a follow-up of Nain et al. (Proc ACM Measure Anal Comput Syst(POMACS) 4, 2020) where a switch distributing entangled multipartite states to sets of users was studied, but only the switch capacity and the expected number of stored qubits were derived.","Boundary value problem; Markov process; Quantum switch; Queueing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","QID/Wehner Group","","",""
"uuid:1e8e94cd-0b6d-4e42-bf56-fc5b83265bd0","http://resolver.tudelft.nl/uuid:1e8e94cd-0b6d-4e42-bf56-fc5b83265bd0","Modeling the Morphodynamic Response of Estuarine Intertidal Shoals to Sea-Level Rise","Elmilady, H.M.S.M.A. (TU Delft Coastal Engineering; Deltares; IHE Delft Institute for Water Education); van der Wegen, M. (Deltares; IHE Delft Institute for Water Education); Roelvink, D. (TU Delft Coastal Engineering; Deltares; IHE Delft Institute for Water Education); van der Spek, A. (Deltares; Universiteit Utrecht)","","2022","Intertidal shoals are key features of estuarine environments worldwide. Climate change poses questions regarding the sustainability of intertidal areas under sea-level rise (SLR). Our work investigates the SLR impact on the long-term morphological evolution of unvegetated intertidal sandy shoals in a constrained channel-shoal system. Utilizing a process-based model (Delft3D), we schematize a short tidal system in a rectangular (2.5 × 20 km) basin with a high-resolution grid. An initial, mildly sloping, bathymetry is subjected to constant semidiurnal tidal forcing, sediment supply, and small wind-generated waves modeled by SWAN. A positive morphodynamic feedback between hydrodynamics, sediment transport, and morphology causes the emergence of large-scale channel-shoal patterns. Over centuries, tide-residual sediment transport gradually decreases leading to a state of low morphological activity balanced by tides, waves, and sediment supply. Tidal currents are the main driver of the SLR morphodynamic adaptation. Wave action leads to wider and lower shoals but does not fundamentally change the long-term morphological evolution. SLR causes increased flood dominance which triggers sediment import into the system. Shoals accrete in response to SLR with a lag that increases as SLR accelerates, eventually causing intertidal shoals to drown. 
Seaward shoals near the open boundary sediment source have higher accretion rates than landward shoals. Similarly, at the shoal scale, the highest accretion rates occur at the shoal edges bounding the sediment-supplying channels. A larger sediment supply enhances the SLR adaptation. Waves help distribute sediment supplied from channels across shoals. Adding mud fractions leads to faster, more uniform accretion and muddier shoals under SLR.","Delft3D; Intertidal shoals; Long-term morphological evolution; Process-based modeling; Sea-level rise; Waves","en","journal article","","","","","","","","","","","Coastal Engineering","","",""
"uuid:5ece17ef-902f-40af-af9a-bd22a89da055","http://resolver.tudelft.nl/uuid:5ece17ef-902f-40af-af9a-bd22a89da055","Chemical process safety education in China: An overview and the way forward","Motalifu, Mailidan (China University of Petroleum (East China)); Tian, Yue (Sinochem); Liu, Yi (China University of Petroleum (East China)); Zhao, Dongfeng (China University of Petroleum (East China)); Bai, Mingqi (China University of Petroleum (East China)); Kan, Yufeng (Wanhua Chemical Group); Qi, Meng (Yonsei University); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); Roy, Nitin (California State University)","","2022","The chemical process industry (CPI) in China is developing rapidly, with installations becoming more complicated and integrated to meet people's rising demand for chemical-related products. However, the fast-growing CPI has caused catastrophic consequences and negative social impact due to accidents that occurred in the last decades, which has threatened its sustainable development. As one of the solutions, the Chinese government is promoting chemical process safety education to train interdisciplinary graduates who understand both chemical processes and loss prevention, are skilled in technology, and know how to manage risk. In this paper, we reviewed the development of chemical process safety education in China by researching the syllabuses of accredited undergraduate Chemical Engineering and Safety Engineering majors in higher education institutions, and discussed the associated shortcomings by analyzing the current discipline construction of the newly established major Chemical Safety Engineering, including education methodologies, resources, faculties, curriculum provision, and professional accreditation. 
Based on the analysis results, suggestions were provided to encourage institutions to strengthen chemical process safety education, thereby inherently reducing human errors and consequently improving the safety of the entire CPI.","Chemical engineering; Chemical process industry; Engineering education accreditation; Interdisciplinary graduates","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Safety and Security Science","","",""
"uuid:394792c1-9edb-422d-a96b-3b695c01d33f","http://resolver.tudelft.nl/uuid:394792c1-9edb-422d-a96b-3b695c01d33f","Safety barriers in the chemical process industries: A state-of-the-art review on their classification, assessment, and management","Yuan, S. (TU Delft Safety and Security Science); Yang, M. (TU Delft Safety and Security Science); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven); Chen, C. (TU Delft Safety and Security Science); Wu, Jiansong (China University of Mining and Technology (Beijing))","","2022","Barriers are used in various forms to assure the safety of chemical plants. A deep understanding of the literature related to safety barriers is essential to tackle the challenges in improving their design and management. This paper first provides an overview of the history of the development of the safety barrier concept. Subsequently, it provides a systematic review of the definition, classification, evaluation, performance assessment, and management of safety barriers in the chemical process industries. Based on the literature review, this study proposes a practical classification of safety barriers that benefits the identification of performance indicators and the collection of indicator-related data for safety barriers. The safety barrier functions are extended and illustrated by involving the resilience concept. Performance assessment criteria are proposed corresponding to the adaptability and recoverability of the safety barriers. Finally, the management of safety barriers is discussed. A roadmap for future studies to develop integrated management of safety and security barriers to ensure the resilience of chemical plants is suggested.","Barrier management; Barrier performance assessment; Process industry; Resilience; Safety barrier","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:a11cce37-7e50-4edc-bcce-e8dfcea4d35a","http://resolver.tudelft.nl/uuid:a11cce37-7e50-4edc-bcce-e8dfcea4d35a","3D Marchenko applications: implementation and examples","Brackenhoff, J.A. (TU Delft Applied Geophysics and Petrophysics; ETH Zürich); Thorbecke, J.W. (TU Delft Applied Geophysics and Petrophysics); Meles, G.A. (TU Delft Applied Geophysics and Petrophysics; University of Lausanne); Koehne, Victor (SENAI CIMATEC); Barrera, Diego (SENAI CIMATEC); Wapenaar, C.P.A. (TU Delft Applied Geophysics and Petrophysics; TU Delft ImPhys/Medical Imaging)","","2022","We implement the 3D Marchenko equations to retrieve responses to virtual sources inside the subsurface. For this, we require reflection data at the surface of the Earth that contain no free-surface multiples and are densely sampled in space. The required 3D reflection data volume is very large, and solving the Marchenko equations requires significant computational effort. To limit the cost, we apply floating point compression to the reflection data to reduce their volume and the loading time from disk. We apply the Marchenko implementation to numerical reflection data to retrieve accurate Green's functions inside the medium and use these reflection data for imaging. This requires the simulation of many virtual source points, which we circumvent by using virtual plane-wave sources instead of virtual point sources. Through this method, we retrieve the angle-dependent response of a source from a depth level rather than of a point. We use these responses to obtain angle-dependent structural images of the subsurface, free of contamination from wrongly imaged internal multiples. These images have less lateral resolution than those obtained using virtual point sources, but are more efficiently retrieved.","Numerical study; Seismics; Signal processing","en","journal article","","","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:0e2facef-0d4f-4a07-bf19-6f9887ffb9cd","http://resolver.tudelft.nl/uuid:0e2facef-0d4f-4a07-bf19-6f9887ffb9cd","Chinese international process safety research: Collaborations, research trends, and intellectual basis","Li, Jie (Chinese Academy of Sciences; Beijing Institute of Technology); Goerlandt, F.M.B. (TU Delft Ship Design, Production and Operations; Dalhousie University); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven); Feng, Changgen (Beijing Institute of Technology); Liu, Yi (China University of Petroleum (East China))","","2022","This article presents a bibliometric analysis and mapping of the Chinese process safety research, focusing on the contributions made in core process safety journals and on the influences of international collaborations and knowledge sources on the developments of this research domain. Collaboration networks, term co-occurrence networks, and co-citation networks were analyzed to identify trends, patterns, and the knowledge distribution of the Chinese research on process safety. Work to date has clustered mainly around the safety of chemical processes, fire and explosion, and risk management and accidents. Chinese research contributions are concentrated in only a few journals, while the corresponding intellectual base draws on the wider literature focused on understanding and modeling phenomena, and on the broader risk research literature, although to a lesser extent. While various foreign authors are highly cited by Chinese authors, only very few direct collaborations with international scholars are identified. The results are used as a basis for a discussion on future research directions and developments for the community. 
Uncertainty treatment and the handling of black swan events, risk evaluation and the economic aspects of safety decisions, interorganizational risk management, road and maritime transport of hazardous substances, risk perception and communication, and integrated safety and security assessment are highlighted as fruitful directions for future scholarship. It is hoped that the insights obtained from this work can facilitate new and consolidated collaborations, as well as further invigorate the Chinese process safety domain, ultimately contributing to improved safety performance of process industries in China and elsewhere.","Bibliometric mapping; Bibliometrics; Chinese process safety research; Process safety; Scientific collaboration; Scientometrics","en","journal article","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:ddbf62f6-31b8-4e5e-95e9-370dc0591559","http://resolver.tudelft.nl/uuid:ddbf62f6-31b8-4e5e-95e9-370dc0591559","Reducing the environmental impacts of the production of melamine etherified resin fibre","Vujanović, Annamaria (University of Maribor); Puhar, Jan (University of Maribor); Krajnc, Damjan (University of Maribor); Awad, P.W.A.A. (TU Delft ChemE/Delft Ingenious Design); Čuček, Lidija (University of Maribor)","","2022","Conventional plastic products present a serious burden to the environment, especially during their end-of-life phase. To tackle the rapid growth in plastic production, use and pollution, it is desirable to produce plastic materials more sustainably. Amongst the plastic materials which could be produced sustainably are Melamine Etherified Resin (MER) fibres, which have a wide range of potential uses, such as in mobility, filtration, thermal protective clothing and other applications. This paper explores the potential for sustainable MER fibre production, where all the required feedstocks could be of either renewable or waste origin. To investigate more sustainable pathways, the conventional process is compared to two alternative processes which utilize waste CO2 and wood-based methanol for formalin production. A comparative environmental impact assessment is conducted, where selected environmental footprints, potential environmental impacts and eco-costs are analysed based on 1 kg of produced MER fibres. Results show that the greenhouse gas (GHG) footprint could be reduced by over 68% and human toxicity potential by over 75%, while eco-costs could be reduced by up to 44%. 
Moreover, the results present the first step towards producing MER fibres in a sustainable way, contributing to the circular economy.","Circular economy; Environmental analysis; Green chemicals; Life cycle assessment (LCA); Melamine etherified resin (MER) fibres; Process simulation","en","journal article","","","","","","","","","","","ChemE/Delft Ingenious Design","","",""
"uuid:acee27cc-05f8-4ef6-aa0b-a9468ce5c634","http://resolver.tudelft.nl/uuid:acee27cc-05f8-4ef6-aa0b-a9468ce5c634","A new class of control structures for heterogeneous reactive distillation processes","Moraru, Mihai Daniel (Hexion, Pernis); Patrascu, Iulian (Politehnica University of Bucharest); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering; The University of Manchester); Bildea, Costin Sorin (Politehnica University of Bucharest)","","2022","There are only a handful of process control structures applied to the neat operation of both homogeneous and heterogeneous reactive distillation, for two-reactants / two-products one-reaction systems. All of these control structures employ inferential temperature control (or concentration analyzers) at some location in the column to balance the reaction stoichiometry. This original study proposes a new class of control structures applicable to heterogeneous reactive distillation. The novel idea, common to all control structures, is based on monitoring the inventory of the reactant involved in the heterogeneous azeotrope. The organic reflux (or the organic reflux / aqueous distillate ratio) is used to detect the excess or deficiency of the reactant, based on which the fresh feed rate is adjusted such that the reaction stoichiometry is balanced. This control philosophy is simple and easy to implement in different ways, as illustrated by several case studies. The performance of the proposed control structures depends on the system studied. For some systems, the performance is better than, as good as, or nearly as good as that of the control structures from the literature. 
But for other systems, the performance is poor or the structure even fails to control the process, due to the insufficient feedback from inventory measurements.","Case studies; Esterification; Process control; Reactive distillation","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","ChemE/Product and Process Engineering","","",""
"uuid:01dc5426-ec44-4875-8279-2b6ea6990571","http://resolver.tudelft.nl/uuid:01dc5426-ec44-4875-8279-2b6ea6990571","A life cycle analysis of novel lightweight composite processes: Reducing the environmental footprint of automotive structures","Wegmann, Stephanie (University of Applied Sciences and Arts Northwestern Switzerland); Rytka, Christian (University of Applied Sciences and Arts Northwestern Switzerland); Diaz-Rodenas, Mariona (University of Applied Sciences and Arts Northwestern Switzerland); Werlen, Vincent (University of Applied Sciences and Arts Northwestern Switzerland; Swiss Federal Institute of Technology); Schneeberger, Christoph (ETH Zürich); Ermanni, Paolo (ETH Zürich); Caglar, Baris (TU Delft Aerospace Manufacturing Technologies; Swiss Federal Institute of Technology); Gomez, Colin (Swiss Federal Institute of Technology); Michaud, Véronique (Swiss Federal Institute of Technology)","","2022","In this study, three novel thermoplastic impregnation processes were analyzed for automotive applications. The first process is thermoplastic compression resin transfer molding, in which a glass fiber mat is impregnated through the thickness by a thermoplastic polymer. The second process is a melt-thermoplastic Resin Transfer Molding (RTM) process in which the glass fibers are impregnated in plane with the help of a spacer. The third process, stamp forming of hybrid bicomponent fibers, coats the fibers individually during glass fiber production. The coated fibers are used to produce a fabric, which is then further processed by stamp forming. These three processes were compared in a life cycle analysis (LCA) against conventional compression resin transfer molding with either glass or carbon fibers, and against metal processes with either steel or aluminum that can be new, partly or fully recycled, using the case study of the production, life and disposal of a car bonnet. 
The presented LCA includes the main phases of the process: extraction and preparation of the raw materials, production and preparation of the mold, the process itself, and energy losses. To include the life of the analyzed bonnet, the amount of diesel that is used to drive the weight of the bonnet for 300′000 km is calculated. In this LCA, the disposal of the bonnet is integrated by analyzing the energy used for recycling and incineration. The results show the potential of the developed thermoplastic impregnation processes for producing automobile parts, as the energy used to produce a thermoplastic bonnet is in the same range as that for steel production.","Composite polymer processing; Energy saving; LCA; LCI; Lightweight construction; Mobility; Thermoplastic impregnation processes","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:44e77b2e-419c-4329-adcb-c9c20a89d7d3","http://resolver.tudelft.nl/uuid:44e77b2e-419c-4329-adcb-c9c20a89d7d3","Learning to understand: disentangling the outcomes of stakeholder participation in climate change governance","Teodoro Morales, J.D. (TU Delft Transport and Logistics; Rijksuniversiteit Groningen); Prell, Christina (Rijksuniversiteit Groningen)","","2022","Stakeholder participation is increasingly seen as beneficial for short- and long-term responses to climate change risks. Past research highlights the role social networks play as both a key outcome of participation and an important step towards other environmental governance goals. This paper focuses on the social relation of mutual understanding, which is often discussed in the environmental governance literature, but has yet to be studied as an empirical social network in its own right. Our paper builds and tests a conceptual framework linking participation to mutual understanding and social learning. We analyze three waves of network and perceptions data gathered on stakeholders participating in the Integrated Coastal Resiliency Assessment (ICRA) project, a 2.5 year-long project aimed at developing a collaborative research assessment on the vulnerabilities to climate change experienced by an island community located in the Chesapeake Bay, USA. Our findings suggest that participation (measured as co-attendance in project events) leads to the formation of mutual understanding ties among stakeholders, but these ties do not necessarily lead to more similarity in stakeholders’ perceptions on climate change. 
We reflect on these findings, and the project more broadly, noting that our study lends support to scholars arguing that feelings of mutual understanding are potentially more important for certain forms of collective action than whether or not stakeholders increase their shared beliefs or perceptions about the environmental problem in question.","Climate change; Co-evolutionary networks; Cognition; Cognitive Networks; Environmental governance; Mutual understanding; Social influence; Participatory processes; Perception; SAOMs; Social learning; Stakeholder networks; Symmetric networks","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:bc5eaf6c-144a-4750-a08c-d063a2a8ca38","http://resolver.tudelft.nl/uuid:bc5eaf6c-144a-4750-a08c-d063a2a8ca38","A Fast Electrical Resistivity-Based Algorithm to Measure and Visualize Two-Phase Swirling Flows","Sattar, Muhammad Awais (Lodz University of Technology); Martinez Garcia, M. (TU Delft ChemE/Transport Phenomena); Portela, L. (TU Delft ChemE/Transport Phenomena); Babout, Laurent (Lodz University of Technology)","","2022","Electrical resistance tomography (ERT) has been used in the literature to monitor gas–liquid separation. However, the image reconstruction algorithms used in these studies take a considerable amount of time to generate the tomograms, far above the time scales of the flow inside the inline separator; as a consequence, the technique is not fast enough to capture all the relevant dynamics of the process, which is vital for control applications. This article proposes a new strategy based on the physics behind the measurement and simple logic to monitor the separation with a high temporal resolution by minimizing both the amount of data and the calculations required to reconstruct one frame of the flow. To demonstrate its potential, the electronics of an ERT system are used together with a high-speed camera to measure the flow inside an inline swirl separator. For the 16-electrode system used in this study, only 12 measurements are required to reconstruct the whole flow distribution with the proposed algorithm, 10× fewer than the minimum number of measurements of ERT (120). In terms of computational effort, the technique was shown to be 1000× faster than solving the inverse problem non-iteratively via the Gauss–Newton approach, one of the computationally cheapest techniques available. 
Therefore, this novel algorithm has the potential to achieve measurement speeds on the order of 10^4 times the ERT speed in the context of inline swirl separation, pointing to flow measurements at around 10 kHz while keeping the average estimation error below 6 mm in the worst-case scenario.","Electrical resistance tomography (ERT); Geometrical parameter extraction; Inline swirl separator; Raw data processing","en","journal article","","","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:45e02e82-d0c6-427a-acfc-a8bb76663b00","http://resolver.tudelft.nl/uuid:45e02e82-d0c6-427a-acfc-a8bb76663b00","Quantifying changes in societal optimism from online sentiment","Isch, Calvin (Indiana University Bloomington); ten Thij, M.C. (TU Delft Applied Probability; Indiana University Bloomington; Universiteit Maastricht); Todd, Peter M. (Indiana University Bloomington); Bollen, Johan (Indiana University Bloomington)","","2022","Individuals can hold contrasting views about distinct times: for example, dread over tomorrow’s appointment and excitement about next summer’s vacation. Yet, psychological measures of optimism often assess only one time point or ask participants to generalize about their future. Here, we address these limitations by developing the optimism curve, a measure of societal optimism, inspired by the Treasury bond yield curve, that compares positivity toward different future times. By performing sentiment analysis on over 3.5 million tweets that reference 23 future time points (2 days to 30 years), we measured how positivity differs across short-, medium-, and longer-term future references. We found a consistent negative association between positivity and the distance into the future referenced: From August 2017 to February 2020, the long-term future was discussed less positively than the short-term future. During the COVID-19 pandemic, this relationship inverted, indicating declining near-future optimism but stable distant-future optimism. Our results demonstrate that individuals hold differentiated attitudes toward the near and distant future that shift in aggregate over time in response to external events. 
The optimism curve uniquely captures these shifting attitudes and may serve as a useful tool that can expand existing psychometric measures of optimism.","Computational science; Natural language processing; Optimism; Optimism curve; Sentiment analysis; Social Media; Societal mood; Societal optimism; Twitter; Yield curve inversion","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Applied Probability","","",""
"uuid:51063971-7e23-46a4-b6bf-86f08bcc74e7","http://resolver.tudelft.nl/uuid:51063971-7e23-46a4-b6bf-86f08bcc74e7","The effect of temperature and excitation energy of the high- and low-spin 4f→5d transitions on charging of traps in Lu2O3:Tb,M (M = Ti, Hf)","Kulesza, Dagmara (University of Wroclaw); Bos, A.J.J. (TU Delft RST/Fundamental Aspects of Materials and Energy); Zych, Eugeniusz (University of Wroclaw)","","2022","This work presents fresh insight into the trapping of excited charges in the Lu2O3:Tb,M (M = Ti, Hf) ceramics and their characteristics as storage and/or persistent luminescence phosphors. The results were obtained by applying an exceedingly versatile set of experiments based on thermoluminescence and thermoluminescence excitation spectroscopy and exposed the dual nature of these materials. Contrary to previous research, we found here that at least some of these materials can generate efficient persistent luminescence due to the presence of shallow traps which can be charged only under specific irradiation conditions – by the spin-forbidden 4f→5d transition of Tb3+ around 360 nm and, possibly, the 7F6→5D3 intra-configurational transition of the activator at just slightly longer wavelengths. Besides that, when the sample charging temperature was changed, the efficiency of filling the traps – both deep and shallow – with the 360 nm radiation varied greatly and exposed a very broad distribution of trap energies. Charging with 360 nm radiation at room temperature fills only the shallow traps, giving intense persistent luminescence never before reported in Lu2O3:Tb,Ti and Lu2O3:Tb,Hf, while at higher temperatures the deep traps are filled. 
At any temperature, radiation of wavelengths < 320 nm fills almost exclusively deep traps responsible for TL at high temperatures, 230 °C in Lu2O3:Tb,Hf and 355 °C in Lu2O3:Tb,Ti.","Defects; Irradiation effect; Luminescence; Thermally activated processes","en","journal article","","","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:10d1ebfb-b8e9-4158-bb26-ddaacbea9040","http://resolver.tudelft.nl/uuid:10d1ebfb-b8e9-4158-bb26-ddaacbea9040","Elucidating the effect of cohesive zone length in fracture simulations of particulate composites","Ponnusami, Sathiskumar Anusuya (City University London); Krishnasamy, J. (TU Delft Aerospace Structures & Computational Mechanics); Turteltaub, S.R. (TU Delft Aerospace Structures & Computational Mechanics); van der Zwaag, S. (TU Delft Novel Aerospace Materials)","","2022","The influence of the cohesive zone length on the crack driving force is quantified and analyzed in a representative system of particles dispersed in a matrix of a composite material. For heterogeneous material systems, e.g. particulate composites, it is known that as a crack approaches the particles, the crack driving force may increase (shielding) or decrease (anti-shielding) depending on the relative stiffness of the particles. These results have been established in numerous studies using the classical linear elastic fracture mechanics approach (LEFM). The cohesive zone method (CZM) introduces a length scale parameter, referred to as the cohesive zone (or fracture process zone) length scale, into the formulation of fracture mechanics. It is generally established that fracture mechanics predictions using the CZM are similar to those obtained using LEFM in the limit case where the process zone is very small relative to a suitable characteristic dimension of the problem. However, the influence of the length scale parameter has not been clearly demonstrated for crack propagation in a heterogeneous material system, especially when the cohesive zone length is not negligible. By considering a simple crack-particle-matrix system, it is shown that, in addition to the elastic properties, the process zone length scale parameter exhibits a critical influence on the crack driving force. 
For this study, the concept of configurational forces is utilized and the eXtended Finite Element Method (XFEM) is employed as a tool to simulate crack propagation. Through numerical simulations, it is shown that (i) the magnitude of the driving force vector directly depends on the length scale parameter and (ii) the direction of the driving force is largely influenced by the presence of a cohesive zone. This, in turn, alters the crack trajectory in the particulate system if the criterion for the direction of crack propagation depends on the orientation of the driving force vector. Towards this end, two different criteria for direction of crack propagation, namely maximum principal stress and maximum energy dissipation, are compared in the presence of a cohesive zone and the results are reported. The study reveals the crucial influence of the inherent length scale associated with the cohesive zone method when applied to crack propagation in particulate composite systems and elucidates important differences when comparing predictions from distinct theories of fracture mechanics.","Cohesive zone fracture mechanics; Crack driving force; Crack-particle interaction; Fracture process zone; Length scale; Particulate composites","en","journal article","","","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:da70f65b-e47c-49ec-8603-b4f567ff08b1","http://resolver.tudelft.nl/uuid:da70f65b-e47c-49ec-8603-b4f567ff08b1","Gaussian process repetitive control: Beyond periodic internal models through kernels","Mooren, Noud (Eindhoven University of Technology); Witvoet, Gert (Eindhoven University of Technology; TNO); Oomen, T.A.E. (TU Delft Team Jan-Willem van Wingerden; Eindhoven University of Technology)","","2022","Repetitive control enables the exact compensation of periodic disturbances if the internal model is appropriately selected. The aim of this paper is to develop a novel synthesis technique for repetitive control (RC) based on a new, more general internal model. By employing a Gaussian process internal model, asymptotic rejection is obtained for a wide range of disturbances through an appropriate selection of a kernel. The implementation is a simple linear time-invariant (LTI) filter that is automatically synthesized through this kernel. The result is a user-friendly design approach based on a limited number of intuitive design variables, such as smoothness and periodicity. The approach naturally extends to reject multi-period and non-periodic disturbances, existing approaches are recovered as special cases, and a case study shows that it outperforms traditional RC in both convergence speed and steady-state error.","Disturbance rejection; Gaussian processes; Internal model control; Repetitive control","en","journal article","","","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:bde655ee-4d45-45e4-a976-4fc948f67c9a","http://resolver.tudelft.nl/uuid:bde655ee-4d45-45e4-a976-4fc948f67c9a","Resilience-based approach to maintenance asset and operational cost planning","Sun, Hao (China University of Petroleum (East China)); Yang, M. (TU Delft Safety and Security Science); Wang, Haiqing (China University of Petroleum (East China))","","2022","Reliability-based and risk-based methods for directing maintenance activities play a critical role in ensuring system safety and reducing unnecessary downtime. These methods focus on preventive maintenance to avoid component failures and are applicable before unexpected disruptions occur. However, when disruptions are unavoidable, more attention should be paid to systems’ recovery from unwanted changes. As a complement to preventive maintenance, improving a system's restoration capacity – its resilience – by optimizing the system's maintenance asset and operational cost is an efficient way to help the system recover from disruptions at optimal cost. In this paper, a resilience-based approach is proposed to optimize maintenance asset and operational cost. A novel resilience metric is developed and utilized to quantify system resilience under various restoration capacities. The minimal acceptable resilience level (MARL) and maximal acceptable restoration time (MART) are proposed to determine the optimal maintenance cost. The proposed approach is applied to the Chevron Richmond refinery crude unit and its upstream process. The results show that it can help practitioners identify the optimal cost to ensure a system is resilient to respond to uncertain disruptions and provide a dynamic resilience profile to support decision-making.","Cost optimization; Maintenance; Process systems; Resilience; Restoration","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:e642554a-a45c-4972-8dc8-1bb53b597f64","http://resolver.tudelft.nl/uuid:e642554a-a45c-4972-8dc8-1bb53b597f64","Reflectivity and emissivity analysis of thermoplastic CFRP for optimising Xenon heating and thermographic measurements","Meister, S. (TU Delft Structural Integrity & Composites; Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)); Kolbe, Andreas (Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)); Groves, R.M. (TU Delft Structural Integrity & Composites)","","2022","The demand for efficient composite production processes is growing as the proportion of composites in modern aircraft increases. Particularly, thermoplastic composites are interesting for sustainability and cost efficiency. They can be manufactured using deposition methods, which involve heating by radiation in the visible and near-infrared spectra. A Xenon flashlamp is commonly used for manufacturing. In-line inspection can be performed using thermographic cameras which measure infrared radiation. For those, the composite's angle-dependent reflection and emission behaviour is of interest. Accordingly, the relationships between angle- and temperature-dependent visible/near-infrared reflectivity and thermal infrared emissivity are investigated, and the composite's conductivity properties are derived. The link between the material's optical and electromagnetic properties is estimated through the Brewster angle derived from Fresnel fitting, which allows the prediction of the directional electrical and thermal conductivity by non-contact measurement. The findings from this study will be valuable for users of Xenon heating and thermographic measurement systems.","Automated fibre placement (AFP); Electrical properties; Optical properties/techniques; Process monitoring","en","journal article","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:bfeaa9a7-39a1-4774-bf6e-cc9b191499ae","http://resolver.tudelft.nl/uuid:bfeaa9a7-39a1-4774-bf6e-cc9b191499ae","Fully integrated CO2 mitigation strategy for an existing refinery: A case study in Colombia","Yáñez, Édgar (Colombian Petroleum Institute); Meerman, Hans (Rijksuniversiteit Groningen); Ramirez, Andrea (TU Delft Energie and Industrie); Castillo, Édgar (Colombian Petroleum Institute); Faaij, Andre (Rijksuniversiteit Groningen; TNO)","","2022","The oil and gas industry is responsible for 6% of total global CO2 emissions, from exploration to downstream petrochemical production, and accounts for another 50% when including the use of its products. Thus, this industry has a significant role in realising the target of net “zero” CO2 emissions by 2070, essential to limit global warming to 1.8 °C [2], as introduced under the Paris Agreement. Currently, the interactions of an extensive set of individual and combined CO2 mitigation measures along the value chain and over time are poorly assessed. This paper aims to assess the bottom-up CO2 mitigation potential for a complex refinery, including portfolios of combined mitigation options, considering synergies, overlap, and interactions over time for more realistic insight into the costs and constraints of the mitigation portfolio. A total of 40 measures were identified, covering a wide range of technologies such as energy efficiency measures (EEM), carbon capture and storage (CCS), bio-oil co-processing, blue and green hydrogen (BH2, GH2), green electricity import, and electrification of refining processes linked to the transition of the Colombian energy systems. Five deployment pathways were assessed to achieve different specific targets: 1-base case scenario, 2-less effort, 3-maximum CO2 avoidance, 4-INDC, and 5-measures below 200 €/t CO2. Two scenarios (3 and 5) gave the highest GHG emission reduction potentials of 106% and 98% of refining process emissions, respectively. 
Although significant, this represents only around 13% of the life-cycle emissions when including upstream and final-use emissions of the produced fuels. Bio-oil co-processing options account for around 60% of the mitigation options portfolio, followed by CCS (23%), green electricity (7%) and green H2 (6%). The devised methodological approach in this study can also be applied to assess other energy-intensive industrial complexes and shed light on the bias in estimating CO2 mitigation potentials, especially when combining different mitigation options. This in turn is vital to define optimal transition pathways of industrial complexes.","CCS; Co-processing; CO2 mitigation; Electrification; Energy efficiency; Hydrogen; Refinery","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Energie and Industrie","","",""
"uuid:5bbbac0b-863e-4093-aa29-75e9b476285b","http://resolver.tudelft.nl/uuid:5bbbac0b-863e-4093-aa29-75e9b476285b","An evaluation of the information literacy of safety professionals","Guo, Y. (TU Delft Data-Intensive Systems; Fuzhou University); Tao, Jing (Fuzhou University); Yang, F. (TU Delft Safety and Security Science; Fuzhou University); Chen, C. (TU Delft Transport and Logistics; Southwest Petroleum University); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven)","","2022","Information literacy has gradually become one of the necessary qualities in current and future safety practices. Calculating and assessing the information literacy of safety professionals is an effective way to understand their information literacy level. This paper, therefore, aims to evaluate the information literacy level of safety management personnel, in order to improve their ability to comprehend safety language/technology/information. Based on the theory of safety information systems and the characteristics of safety professionals, this study develops an index system to assess the information literacy level of safety professionals. The index system consists of five indexes: safety information demand consciousness, safety information acquisition ability, safety information evaluation ability, safety information utilization ability, and information ethics. The weight of each index is determined using the analytic hierarchy process method. The developed method was implemented to evaluate the safety information literacy level of 40 safety professionals from four different corporations. The quantitative results of the fuzzy evaluation are in good agreement with the qualitative analysis results, indicating that the index system has excellent applicability and can be applied to the evaluation of the information literacy level of safety professionals. 
In addition, recommendations are put forward to improve the information literacy of safety professionals.","analytic hierarchy process (AHP); Fuzzy comprehensive evaluation; information literacy (IL); Safety professional","en","journal article","","","","","","","","","","","Data-Intensive Systems","","",""
"uuid:b3b9d0de-2910-4569-91a4-29b6aa46113f","http://resolver.tudelft.nl/uuid:b3b9d0de-2910-4569-91a4-29b6aa46113f","Scaling Local Bottom-Up Innovations through Value Co-Creation","Marradi, C. (TU Delft Education AE); Mulder, I. (TU Delft Design Conceptualization and Communication)","","2022","Bottom-up initiatives of active citizens are increasingly demonstrating sustainable practices within local ecosystems. Local urban farming, sustainable agri-food systems, circular supply chains, and community fablabs are exemplary ways of tackling global challenges on a local level. Although promising in accelerating towards future-proof systems, these hyper-localized, bottom-up initiatives often struggle to take root in new contexts due to embedded socio-cultural challenges. With the premise that transformative capacity can be co-created to overcome such scaling challenges, the current work addresses the identified gap in scaling bottom-up initiatives into locally embedded ecosystems. Since diffusing such practices across contexts is not straightforward, we introduce a three-phased approach enabling knowledge exchange and easing collaboration across cultures and ecosystems. The results allowed us to define common scalability criteria and to unfold scaling as a multi-step learning process to bridge identified cognitive and context gaps. The current article contributes to a broader activation of impact-driven scaling strategies and value creation processes that are transferable across contexts and deemed relevant for local ecosystems that are willing to co-create resilient socio-economic systems.","co-creation; cross-cultural learning; innovation ecosystems; mission-driven innovation; resilience; scaling strategies; urban food systems; value creation process","en","journal article","","","","","","","","","","","Education AE","","",""
"uuid:7d49e97b-bbc1-4c9a-818b-2d87ae9ce7a4","http://resolver.tudelft.nl/uuid:7d49e97b-bbc1-4c9a-818b-2d87ae9ce7a4","Corrosion and Microstructural Investigation on Additively Manufactured 316L Stainless Steel: Experimental and Statistical Approach","Maicas Esteve, H. (Norwegian University of Science and Technology (NTNU)); Taji, Iman (Norwegian University of Science and Technology (NTNU)); Wilms, M.E. (Shell); Gonzalez Garcia, Y. (TU Delft Team Yaiza Gonzalez Garcia); Johnsen, Roy (Norwegian University of Science and Technology (NTNU))","","2022","The use of metal additive manufacturing (AM) has increased strongly in industry in recent years. More specifically, selective laser melting (SLM) is one of the most used techniques due to its numerous advantages compared to conventional processing methods. The purpose of this study is to investigate the effects of process parameters on the microstructural and corrosion properties of additively manufactured AISI 316L stainless steel. Porosity, surface roughness, hardness, and grain size were studied for specimens produced with energy densities ranging from 51.17 to 173.91 J/mm3 that resulted from different combinations of processing parameters. Using experimental results and applying the Taguchi model, 99.38 J/mm3 was determined as the optimal energy density needed to produce samples with almost no porosity. The subsequent analysis of variance (ANOVA) confirmed the scanning speed as the most influential factor in reducing the porosity percentage, with a 74.9% contribution, followed by the position along the building direction with 22.8%, and finally, the laser energy with 2.3%. The influence on corrosion resistance was obtained by performing cyclic potentiodynamic polarization tests (CPP) in a 3.5 wt % NaCl solution at room temperature for different energy densities and positions (Z axis). 
The corrosion properties of the AM samples were studied and compared to those obtained from the traditionally manufactured samples. The corrosion resistance of the samples worsened with the increase in the percentage of porosity. The process parameters have consequently been optimized and the database has been extended to improve the quality of the AM-produced parts in which microstructural heterogeneities were observed along the building direction.","316L SS; Additive manufacturing; Pitting corrosion; Processing parameters; Selective laser melting","en","journal article","","","","","","","","","","","Team Yaiza Gonzalez Garcia","","",""
"uuid:8176ef00-566e-4e21-99eb-7fbfa678e68d","http://resolver.tudelft.nl/uuid:8176ef00-566e-4e21-99eb-7fbfa678e68d","Tracking traffic congestion and accidents using social media data: A case study of Shanghai","Chang, Haoliang (City University of Hong Kong); Li, L. (TU Delft Air Transport & Operations; City University of Hong Kong); Huang, Jianxiang (The University of Hong Kong); Zhang, Qingpeng (City University of Hong Kong); Chin, Kwai Sang (City University of Hong Kong)","","2022","Traffic congestion and accidents take a toll on commuters' daily experiences and society. Locating the venues prone to congestion and accidents, and capturing how they are perceived by members of the public, is invaluable for transport policy-makers. However, few previous methods consider user perception toward the accidents and congestion in finding and profiling the accident- and congestion-prone areas, leaving decision-makers unaware of the subsequent behavior responses and priorities of retrofitting measures. This study develops a framework to identify and characterize the accident- and congestion-prone areas heatedly discussed on social media. First, we use natural language processing and deep learning to detect the accident- and congestion-relevant Chinese microblogs posted on Sina Weibo, a Chinese social media platform. Then a modified Kernel Density Estimation method considering the sentiment of microblogs is employed to find the accident- and congestion-prone regions. The results show that the 'congestion-prone areas' discussed on social media are mainly distributed throughout the historical urban core and the Northwest of Pudong New Area, in reasonably good agreement with actual congestion records. In contrast, the 'accident-prone areas' are primarily found in locations with severe accidents. Finally, the above venues are characterized in spatio-temporal and semantic aspects to understand the nature of the incidents and assess the priority level for mitigation measures. 
The outcomes can provide a reference for traffic authorities to inform resource allocation and prioritize mitigation measures in future traffic management.","Geographic information science; Kernel density estimation; Natural language processing; Social media data; Traffic accident; Traffic congestion","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Air Transport & Operations","","",""
"uuid:660dd698-298d-435c-a8c0-7891d1d789b5","http://resolver.tudelft.nl/uuid:660dd698-298d-435c-a8c0-7891d1d789b5","An accurate and efficient method to train classifiers for atrial fibrillation detection in ECGs: Learning by asking better questions","Wesselius, F.J. (Erasmus MC); van Schie, M.S. (Erasmus MC); de Groot, N.M.S. (TU Delft Signal Processing Systems; Erasmus MC); Hendriks, R.C. (TU Delft Signal Processing Systems)","","2022","Background: An increasing number of wearables are capable of measuring electrocardiograms (ECGs), which may help in early detection of atrial fibrillation (AF). Therefore, many studies focus on automated detection of AF in ECGs. A major obstacle is the required amount of manually labelled data. This study aimed to provide an efficient and reliable method to train a classifier for AF detection using large datasets of real-life ECGs. Method: Human-controlled semi-supervised learning was applied, consisting of two phases: the pre-training phase and the semi-automated training phase. During pre-training, an initial classifier was trained, which was used to predict the classes of new ECG segments in the semi-automated training phase. Based on the degree of certainty, segments were added to the training dataset automatically or after human validation. Thereafter, the classifier was retrained and this procedure was repeated. To test the model performance, a real-life telemetry dataset containing 3,846,564 30-s ECG segments of hospitalized patients (n = 476) and the CinC Challenge 2017 database were used. Results: After pre-training, the average F1-score on a hidden testing dataset was 89.0%. Furthermore, after the pre-training phase 68.0% of all segments in the hidden test set could be classified with an estimated probability of successful classification of 99%, providing an F1-score of 97.9% for these segments. 
During the semi-automated training phase, this F1-score showed little variation (97.3%–97.9% in the hidden test set), whilst the number of segments which could be automatically classified increased from 68.0% to 75.8% due to the enhanced training dataset. At the same time, the overall F1-score increased from 89.0% to 91.4%. Conclusions: Human-validated semi-supervised learning makes training a classifier more time efficient without compromising on accuracy, hence this method might be valuable in the automated detection of AF in real-life ECGs.","Algorithms; Atrial fibrillation; Classification; ECG signal Processing; Machine learning; Telemetry","en","journal article","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:2f219c22-e0c9-4559-acb0-0663b7af23ac","http://resolver.tudelft.nl/uuid:2f219c22-e0c9-4559-acb0-0663b7af23ac","Automated Customer Complaint Processing for Water Utilities Based on Natural Language Processing—Case Study of a Dutch Water Utility","Tian, Xin (KWR Water Research Institute); Vertommen, Ina (KWR Water Research Institute); Tsiami, Lydia (KWR Water Research Institute; National Technical University of Athens); van Thienen, Peter (KWR Water Research Institute); Paraskevopoulos, S. (TU Delft Sanitary Engineering; KWR Water Research Institute)","","2022","Most water utilities have to handle a substantial number of customer complaints every year. Traditionally, complaints are handled by skilled staff who know how to identify primary issues, classify complaints, find solutions, and communicate with customers. The effort associated with complaint processing is often great, depending on the number of customers served by a water utility. However, the rise of natural language processing (NLP), enabled by deep learning, and especially the use of deep recurrent and convolutional neural networks, has created new opportunities for comprehending and interpreting text complaints. As such, we aim to investigate the value of the use of NLP for processing customer complaints. Through a case study about the Water Utility Groningen in the Netherlands, we demonstrate that NLP can parse language structures and extract intents and sentiments from customer complaints. As a result, this study represents a critical and fundamental step toward fully automating consumer complaint processing for water utilities.","Artificial intelligence; Customer complaint processing; Natural language processing; Water sector","en","journal article","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:740b7942-b3eb-404b-889c-3d8da986994f","http://resolver.tudelft.nl/uuid:740b7942-b3eb-404b-889c-3d8da986994f","Capillary Effects in Fiber Reinforced Polymer Composite Processing: A Review","Teixidó, Helena (Swiss Federal Institute of Technology); Staal, Jeroen (Swiss Federal Institute of Technology); Caglar, Baris (TU Delft Aerospace Manufacturing Technologies); Michaud, Véronique (Swiss Federal Institute of Technology)","","2022","Capillarity plays a crucial role in many natural and engineered systems, ranging from nutrient delivery in plants to functional textiles for wear comfort or thermal heat pipes for heat dissipation. Unlike nano- or microfluidic systems with well-defined pore network geometries and well-understood capillary flow, fiber textiles or preforms used in composite structures exhibit highly anisotropic pore networks that span from micron scale pores between fibers to millimeter scale pores between fiber yarns that are woven or stitched into a textile preform. Owing to the nature of the composite manufacturing processes, capillary action taking place in the complex network is usually coupled with hydrodynamics as well as the (chemo) rheology of the polymer matrices; these phenomena are known to play a crucial role in producing high quality composites. Despite its importance, the role of capillary effects in composite processing has largely remained overlooked. Their magnitude is indeed rather low compared to hydrodynamic effects, and it is difficult to characterize them due to a lack of adequate monitoring techniques to capture the time and spatial scale on which the capillary effects take place. There is a renewed interest in this topic, due to a combination of increasing demand for high performance composites and recent advances in experimental techniques as well as numerical modeling methods. The present review covers the developments in the identification, measurement and exploitation of capillary effects in composite manufacturing. 
A special focus is placed on Liquid Composite Molding processes, where a dry stack is impregnated with a low viscosity thermoset resin mainly via in-plane flow, thus exacerbating the capillary effects within the anisotropic pore network of the reinforcements. Experimental techniques to investigate the capillary effects and their evolution from post-mortem analyses to in-situ/rapid techniques compatible with both translucent and non-translucent reinforcements are reviewed. Approaches to control and enhance the capillary effects for improving composite quality are then introduced. This is complemented by a survey of numerical techniques to incorporate capillary effects in process simulation, material characterization and by the remaining challenges in the study of capillary effects in composite manufacturing.","capillary effects; composite processing; fiber reinforced polymers; liquid composite molding; textile preforms","en","review","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:c471496f-44d4-4439-962c-b5b186c11425","http://resolver.tudelft.nl/uuid:c471496f-44d4-4439-962c-b5b186c11425","Adaptive schemes for piecewise deterministic Monte Carlo algorithms","Bertazzi, A. (TU Delft Statistics); Bierkens, G.N.J.C. (TU Delft Statistics)","","2022","The Bouncy Particle sampler (BPS) and the Zig-Zag sampler (ZZS) are continuous time, non-reversible Monte Carlo methods based on piecewise deterministic Markov processes. Experiments show that the speed of convergence of these samplers can be affected by the shape of the target distribution, as for instance in the case of anisotropic targets. We propose an adaptive scheme that iteratively learns all or part of the covariance matrix of the target and takes advantage of the obtained information to modify the underlying process with the aim of increasing the speed of convergence. Moreover, we define an adaptive scheme that automatically tunes the refreshment rate of the BPS or ZZS. We prove ergodicity and a law of large numbers for all the proposed adaptive algorithms. Finally, we show the benefits of the adaptive samplers with several numerical simulations.","Adaptive Markov process Monte Carlo; bouncy particle sampler; ergodicity; piecewise deterministic Markov processes; zig-zag sampler","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Statistics","","",""
"uuid:1b81547a-8459-4e9b-b188-33200068389c","http://resolver.tudelft.nl/uuid:1b81547a-8459-4e9b-b188-33200068389c","A Markovian decision model of adaptive cancer treatment and quality of life","Bayer, Péter (Esplanade de l’université, Toulouse); Brown, Joel S. (Lee Moffitt Cancer Center and Research Institute); Dubbeldam, J.L.A. (TU Delft Mathematical Physics); Broom, Mark (City University London)","","2022","This paper develops and analyzes a Markov chain model for the treatment of cancer. Cancer therapy is modeled as the patient's Markov Decision Problem, with the objective of maximizing the patient's discounted expected quality of life years. Patients make decisions on the duration of therapy based on the progression of the disease as well as their own preferences. We obtain a powerful analytic decision tool through which patients may select their preferred treatment strategy. We illustrate the tradeoffs patients face in a numerical example and calculate the value lost to a cohort following suboptimal strategies. In a second model, patients may choose to include drug holidays. By delaying therapy, the patient temporarily forgoes the gains of therapy in order to delay its side effects. We obtain an analytic tool that allows numerical approximations of the optimal times of delay.","Cancer therapy; Dynamic optimization; Markov decision processes; Quality of life","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Mathematical Physics","","",""
"uuid:abee2cd5-7b8e-4662-841b-24a3132c4e9b","http://resolver.tudelft.nl/uuid:abee2cd5-7b8e-4662-841b-24a3132c4e9b","Cardiac MR: From Theory to Practice","Ismail, Tevfik F. (King’s College London; Guy's and St Thomas’ NHS Foundation Trust); Strugnell, Wendy (Mater Hospital Brisbane); Coletti, C. (TU Delft ImPhys/Medical Imaging); Bozic, M. (TU Delft ImPhys/Medical Imaging; University Heidelberg); Weingärtner, S.D. (TU Delft ImPhys/Computational Imaging; TU Delft ImPhys/Medical Imaging); Hammernik, Kerstin (Technische Universität München; Imperial College London); Correia, Teresa (King’s College London; Centre of Marine Sciences); Küstner, Thomas (Tübingen University Hospital)","","2022","Cardiovascular disease (CVD) is the leading single cause of morbidity and mortality, causing over 17.9 million deaths worldwide per year with associated costs of over $800 billion. Improving prevention, diagnosis, and treatment of CVD is therefore a global priority. Cardiovascular magnetic resonance (CMR) has emerged as a clinically important technique for the assessment of cardiovascular anatomy, function, perfusion, and viability. However, diversity and complexity of imaging, reconstruction and analysis methods pose some limitations to the widespread use of CMR. Especially in view of recent developments in the field of machine learning that provide novel solutions to address existing problems, it is necessary to bridge the gap between the clinical and scientific communities. This review covers five essential aspects of CMR to provide a comprehensive overview ranging from CVDs to CMR pulse sequence design, acquisition protocols, motion handling, image reconstruction and quantitative analysis of the obtained data. (1) The basic MR physics of CMR is introduced. Basic pulse sequence building blocks that are commonly used in CMR imaging are presented. Sequences containing these building blocks are formed for parametric mapping and functional imaging techniques. 
Commonly perceived artifacts and potential countermeasures are discussed for these methods. (2) CMR methods for identifying CVDs are illustrated. Basic anatomy and functional processes are described to understand the cardiac pathologies and how they can be captured by CMR imaging. (3) The planning and conduct of a complete CMR exam which is targeted for the respective pathology is shown. Building blocks are illustrated to create an efficient and patient-centered workflow. Further strategies to cope with challenging patients are discussed. (4) Imaging acceleration and reconstruction techniques are presented that enable acquisition of spatial, temporal, and parametric dynamics of the cardiac cycle. The handling of respiratory and cardiac motion strategies as well as their integration into the reconstruction processes is showcased. (5) Recent advances on deep learning-based reconstructions for this purpose are summarized. Furthermore, an overview of novel deep learning image segmentation and analysis methods is provided with a focus on automatic, fast and reliable extraction of biomarkers and parameters of clinical relevance.","cardiovascular MR; CMR protocol; deep learning; image processing; image reconstruction; imaging acceleration; quantitative imaging; sequence design","en","review","","","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:8f08811e-9e1c-4052-854e-6f2592e74097","http://resolver.tudelft.nl/uuid:8f08811e-9e1c-4052-854e-6f2592e74097","How innovations in methodology offer new prospects for volume electron microscopy","Kievits, A.J. (TU Delft ImPhys/Microscopy Instrumentation & Techniques); Lane, R. (TU Delft ImPhys/Microscopy Instrumentation & Techniques); Carroll, E.C.M. (TU Delft ImPhys/Microscopy Instrumentation & Techniques); Hoogenboom, J.P. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","","2022","Detailed knowledge of biological structure has been key in understanding biology at several levels of organisation, from organs to cells and proteins. Volume electron microscopy (volume EM) provides high resolution 3D structural information about tissues on the nanometre scale. However, the throughput rate of conventional electron microscopes has limited the volume size and number of samples that can be imaged. Recent improvements in methodology are currently driving a revolution in volume EM, making possible the structural imaging of whole organs and small organisms. In turn, these recent developments in image acquisition have created or stressed bottlenecks in other parts of the pipeline, like sample preparation, image analysis and data management. While the progress in image analysis is stunning due to the advent of automatic segmentation and server-based annotation tools, several challenges remain. Here we discuss recent trends in volume EM, emerging methods for increasing throughput and implications for sample preparation, image analysis and data management.","data management; image analysis; image processing; MB-SEM; methodology development; volume EM","en","review","","","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:5f84c265-c0c3-4317-b613-e4f56c1b211b","http://resolver.tudelft.nl/uuid:5f84c265-c0c3-4317-b613-e4f56c1b211b","Hierarchically patterned multiphase steels created by localised laser treatments","Breukelman, H. J. (Student TU Delft); Santofimia, Maria Jesus (TU Delft Team Maria Santofimia Navarro); Hidalgo Garcia, J. (TU Delft Team Kevin Rossi; Universidad de Castilla-La Mancha)","","2022","The realisation of sophisticated hierarchically patterned multiphase steels has the potential to enable unprecedented properties in engineering components. The present work explores the controlled creation of patterned multiphase steels in which the patterns are defined by two different crystal structures: face centre cubic or fcc (austenite) and body centre cubic or bcc (martensite). These austenite/martensite mesostructures are generated by solid–solid phase transformations during the application of localised laser heat treatments in a Fe-Ni-C alloy. In particular, four patterned configurations are analysed in this work consisting of one or two horizontal austenite line structures imprinted in a base of as-quenched or tempered martensite. Digital image correlation analysis during tensile testing of the developed materials showed that both the strength of the base martensite and the mesostructure at the gauge have a strong effect on the resulting properties. Clear differences were observed among the configurations in strain partitioning, hardening of the different constituents and failure. The uniform elongation and tensile strength are increased with respect to those of the reference martensite and austenite, respectively. 
Concepts explored in this work can be extended to more complex patterns and other base microstructures, opening novel strategies to engineer properties in steel and other alloys.","Austenite; Flash heating; Laser material processing; Local heat treatment; Martensite; Patterned microstructures","en","journal article","","","","","","","","","","","Team Maria Santofimia Navarro","","",""
"uuid:50eb3b98-cacf-45d0-a823-923647d68e03","http://resolver.tudelft.nl/uuid:50eb3b98-cacf-45d0-a823-923647d68e03","Unraveling the effect of variable natural gas feedstock on an industrial ammonia process","Rokhayati, Erna (The University of Manchester); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering; The University of Manchester)","","2022","Ammonia plays a critical role as the second most produced chemical commodity, with around 80% used in producing nitrogen-based fertilizer. Considering the decline of reserves of fossil-based feedstocks it is imperative to shift towards greener alternatives. However, such green ammonia processes are far from being economically viable. This makes natural gas-based ammonia synthesis the best available technology currently, but this faces critical difficulties as natural gas supply could widely vary due to declining reserve or changing sources, posing another key challenge to improve the efficiency of affected ammonia production. This study is the first to investigate the effect of variable natural gas composition (within the range of 83–99.99% vol dry methane; towards lean gas) on an industrial ammonia production process while maintaining key operating parameter values such as steam to carbon ratio (S/C), hydrogen over nitrogen ratio (H/N), etc. The sensitivity analysis shows that sustained energy efficiency of the process is possible, confirming the conventional ammonia plant's ability to withstand changes in feedstock and fuel supply. In addition, lean gas yielded a positive impact on the raw material intensity and CO2 emissions with average reductions of 1.17% and 1.79% per each 4% methane content increase, respectively.","Energy savings; Process design; Process integration; Process optimization; Process simulation","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:6e09c925-506d-41c5-a4e1-36369ea905e4","http://resolver.tudelft.nl/uuid:6e09c925-506d-41c5-a4e1-36369ea905e4","Virtual Reality Tool for Human-Machine Interface Evaluation and Development (VRHEAD)","Aldea, Anna (SWOV Institute for Road Safety Research); Tinga, Angelica M. (SWOV Institute for Road Safety Research); van Zeumeren, I.M. (TU Delft Design Aesthetics); van Nes, C.N. (TU Delft Applied Ergonomics and Design; SWOV Institute for Road Safety Research); Aschenbrenner, D. (Aalen University, Aalen)","","2022","Higher levels of vehicle automation come with new challenges for designing safe systems. The Human-Machine Interface (HMI) plays a key role in mediating the interaction between the human driver and vehicle automation. By providing the driver with appropriate feedback, the HMI has the potential to increase mode awareness and situational awareness. For the development of appropriate HMI solutions, usability assessments are essential. Immersive Virtual Reality (VR) technology enables researchers and designers to construct realistic virtual prototypes and immersive evaluation scenarios with less time and fewer resources. The current study presents a VR evaluation tool called VRHEAD, which is designed to facilitate an iterative design process and support the rapid implementation of virtual prototypes to evaluate an automated vehicle's HMI. Initial results indicate that VRHEAD is a promising approach for the rapid implementation and evaluation of design concepts. 
The use of VR tools, like VRHEAD, can reduce the time and costs associated with developing high-fidelity prototypes and provide more flexibility in modifying a design according to new research findings, thus broadening the exploration of the HMI design space.","automated driving; design evaluation; design for experiments; HMI; Human Centered Design; human-machine interaction; iterative design process; mode awareness; rapid prototyping; virtual reality","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Design Aesthetics","","",""
"uuid:2000d7f9-2bb5-425f-bbb6-ee7c27aefe9f","http://resolver.tudelft.nl/uuid:2000d7f9-2bb5-425f-bbb6-ee7c27aefe9f","A stochastic model of geomorphic risk due to episodic river aggradation and degradation","Chen, Tzu Yin Kasha (National Taiwan University); Hung, Chi Yao (National Chung Hsing University); Chiang, Y.-C. (TU Delft Structural Design & Mechanics); Hsieh, Meng Long (National Chung Cheng University); Capart, Hervé (National Taiwan University)","","2022","In some steep valleys, flood-induced changes in river bed elevation pose significantly greater risks to infrastructure than floodwaters alone. Over the short term, the river may aggrade or degrade by several meters during a single flood. Moreover, whereas floodwaters recede after each event, riverbed changes accumulate over successive floods. To quantify the resulting geomorphic risk and its evolution over time, we propose in this paper a new stochastic model of river bed elevation change. The bed is assumed to rise and drop according to a random walk, driven by the composition of two gamma processes that respectively pace the hydrologic forcing and the geomorphic response. The model can therefore incorporate various sources of uncertainty, associated with precipitation and debris flow activity within the contributing watershed. To test the model, we apply it to a highly active montane river, the Laonong River in southwestern Taiwan. Model calibration is achieved from a combination of long and short term data, including radiocarbon-dated deposits and modern river records. The modelled distributions fit the data well, including the likelihood of extreme changes. 
The model also produces a reasonable hindcast of the geomorphic damage suffered over the last ten years by Highway 20, a vulnerable road link sited along the river, and can be used to forecast future geomorphic risk.","Flood impacts; Geological uncertainty; Risk assessment; River aggradation; Stochastic process","en","journal article","","","","","","","","","","","Structural Design & Mechanics","","",""
"uuid:4d77291b-a99f-4d9a-8f6e-ecb7452b6fea","http://resolver.tudelft.nl/uuid:4d77291b-a99f-4d9a-8f6e-ecb7452b6fea","Novel intensified process for ethanolamines production using reactive distillation and dividing-wall column technologies","Devaraja, Devnarayan (The University of Manchester); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering; The University of Manchester)","","2022","Monoethanolamine (MEA) is an essential chemical used as feedstock in the production of detergents, emulsifiers, pharmaceuticals, polishes, corrosion inhibitors, and chemical intermediates. It is produced industrially by treating ethylene oxide with aqueous ammonia, but the reaction also leads to di- and tri-ethanolamine as less desired by-products. This study is the first to propose an intensified process for the production of ethanolamines combining reactive distillation (RD) and dividing-wall column (DWC) technologies. The process was optimized to maximize the MEA selectivity (over 71%), as the ratio of the products can be controlled by the stoichiometry of the reactants. Rigorous process simulations and sensitivity analysis of key process parameters have been carried out using Aspen Plus, for a plant with a production capacity of 11.5 ktpy ethanolamines. The overall process has been designed to produce ethanolamines with minimal energy utilization and reduced capital cost. Economic and sustainability analyses have been carried out, showing the key benefits of the proposed process as compared to the conventional one used in industry: CapEx reduction of 7.3%, OpEx savings of 42%, and TAC improvements of 31.3%.","Energy savings; Optimization; Process design; Process intensification; Process simulation","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:3cf28e84-0f7e-463b-b421-980987d210d4","http://resolver.tudelft.nl/uuid:3cf28e84-0f7e-463b-b421-980987d210d4","On the Estimation of Vector Wind Profiles Using Aircraft-Derived Data and Gaussian Process Regression","Marinescu, Marius (Universidad Rey Juan Carlos); Olivares, Alberto (Universidad Rey Juan Carlos); Staffetti, Ernesto (Universidad Rey Juan Carlos); Sun, Junzi (TU Delft Control & Simulation)","","2022","This work addresses the problem of vertical wind profile online estimation at a given location. Specifically, the north and east components of the wind are continuously estimated as functions of time and altitude at two waypoints used for landing on the Adolfo Suarez Madrid-Barajas airport. A continuous nowcast of the wind profile is performed in which wind observations are derived from the aircraft states and assimilated into the model. It is well known that wind is one of the greatest contributors to uncertainties in the current and future paradigm of Air Traffic Management. Accurate wind information is key in continuous climb and descent operations, spacing, four dimensional trajectory-based operations, and aircraft performance studies, among others. In this work, wind data are obtained indirectly from the aircraft’s states broadcast by the Mode S and ADS-B aircraft surveillance systems. The Gaussian process regression is adapted to this framework and used to solve the problem. The presented method makes it possible to construct a complete vector wind profile at any specific position that is continuous in time and altitude; namely, there is no need for grid points and time discretisation. The Gaussian process regression is a very flexible estimator which is statistically consistent under general conditions, meaning that it converges to the ground truth as more and more data are provided. 
In addition, the Gaussian process regression approach provides the whole probability distribution of any particular estimation, allowing confidence intervals to be computed naturally. In the case study presented in this paper, in which the wind is constantly estimated, the Gaussian process regression model is iteratively updated every 15 min to capture possible changes in the wind behaviour and give an estimation of the wind profile every half a minute. The method has been validated using a test dataset, achieving a reduction of 50% of the prediction uncertainty in comparison to a baseline model. Moreover, two popular wind profile estimators based on the Kalman filter are also implemented for the sake of comparison. The Kalman filter outperforms the baseline model, but it does not outperform the Gaussian process regression with errors higher by around 35%, in comparison. The obtained results show that the Gaussian process regression of aircraft-derived data reliably nowcast the wind state, which is key in Air Traffic Management.","ADS-B; Air Traffic Management; Gaussian process regression; Kalman filter; Mode S; wind estimation","en","journal article","","","","","","","","","","","Control & Simulation","","",""
"uuid:1ed233be-f1ac-4d19-a4b8-5a557f9beafe","http://resolver.tudelft.nl/uuid:1ed233be-f1ac-4d19-a4b8-5a557f9beafe","Approximations of Piecewise Deterministic Markov Processes and their convergence properties","Bertazzi, A. (TU Delft Statistics); Bierkens, G.N.J.C. (TU Delft Statistics); Dobson, P. (TU Delft Statistics)","","2022","Piecewise deterministic Markov processes (PDMPs) are a class of stochastic processes with applications in several fields of applied mathematics spanning from mathematical modelling of physical phenomena to computational methods. A PDMP is specified by three characteristic quantities: the deterministic motion, the law of the random event times, and the jump kernels. The applicability of PDMPs to real world scenarios is currently limited by the fact that these processes can be simulated only when these three characteristics of the process can be simulated exactly. In order to overcome this problem, we introduce discretisation schemes for PDMPs which make their approximate simulation possible. In particular, we design both first order and higher order schemes that rely on approximations of one or more of the three characteristics. For the proposed approximation schemes we study both pathwise convergence to the continuous PDMP as the step size converges to zero and convergence in law to the invariant measure of the PDMP in the long time limit. Moreover, we apply our theoretical results to several PDMPs that arise from the computational statistics and mathematical biology literature.","Coupling; Numerical approximation; Piecewise Deterministic Markov Processes; Weak error","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:c82dcc5a-6446-4298-85e4-bcae7e7e2ec4","http://resolver.tudelft.nl/uuid:c82dcc5a-6446-4298-85e4-bcae7e7e2ec4","Continuous-flow CvFAP photodecarboxylation of palmitic acid under environmentally friendly conditions","Benincá, Luiza A.D. (Universidade Federal do Rio de Janeiro); França, Alexandre S. (Universidade Federal do Rio de Janeiro); Brêda, Gabriela C. (Universidade Federal do Rio de Janeiro); Leão, Raquel A.C. (Universidade Federal do Rio de Janeiro); Almeida, Rodrigo V. (Universidade Federal do Rio de Janeiro); Hollmann, F. (TU Delft BT/Biocatalysis); de Souza, Rodrigo O.M.A. (Universidade Federal do Rio de Janeiro)","","2022","The fatty acid photodecarboxylase from Chlorella variabilis NC64A (CvFAP) promotes the elimination of CO2 from fatty acids (Cn), producing the corresponding hydrocarbon (Cn-1). Therefore, this enzyme is of great biotechnological interest since it can be used in alternative biofuel production routes matching the concept of green chemistry. However, due to its recent discovery, this reaction still requires optimization, which was the focus of the present work, together with the application of a continuous flow system. The results in batch reactors showed the importance of using high-power LED lamps (300 W) to reduce the reaction time for full conversion (30 min, >99%). In another approach, a continuous flow system demonstrated high potential, as it enabled full conversion with half the concentration of enzyme extract in a very short residence time of 15 min. Furthermore, less expensive and sustainable light sources, not previously reported for reactions with CvFAP, were evaluated: full conversion (>99%) was achieved after 1 h in continuous flow reactions using a common 300 W white LED lamp, based on preliminary batch reaction investigations using direct sunlight.
Thus, important advances and new perspectives for CvFAP photodecarboxylation reactions could be achieved with the present report.","Biocatalysis; Continuous flow; Green chemistry; Photodecarboxylation; Process intensification","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-01-06","","","BT/Biocatalysis","","",""
"uuid:f4cc2b8f-b805-4d11-96e6-6a61a19037eb","http://resolver.tudelft.nl/uuid:f4cc2b8f-b805-4d11-96e6-6a61a19037eb","Large-Scale Wildfire Mitigation Through Deep Reinforcement Learning","Altamimi, Abdulelah (The Pennsylvania State University); Lagoa, Constantino (The Pennsylvania State University); Borges, José G. (University of Lisbon); McDill, Marc E. (The Pennsylvania State University); Andriotis, C. (TU Delft Structural Design & Mechanics); Papakonstantinou, K. G. (The Pennsylvania State University)","","2022","Forest management can be seen as a sequential decision-making problem to determine an optimal scheduling policy, e.g., harvest, thinning, or do-nothing, that can mitigate the risks of wildfire. Markov Decision Processes (MDPs) offer an efficient mathematical framework for optimizing forest management policies. However, computing optimal MDP solutions is computationally challenging for large-scale forests due to the curse of dimensionality, as the total number of forest states grows exponentially with the number of stands into which the forest is discretized. In this work, we propose a Deep Reinforcement Learning (DRL) approach to improve forest management plans that track the forest dynamics in a large area. The approach emphasizes the prevention and mitigation of wildfire risks by determining highly efficient management policies. A large-scale forest model is designed using a spatial MDP that divides the square-matrix forest into equal stands. The model considers the probability of wildfire to depend on the forest timber volume, the flammability, and the directional distribution of the wind, using data that reflect the inventory of a typical eucalypt (Eucalyptus globulus Labill) plantation in Portugal. In this spatial MDP, the agent (decision-maker) takes an action at one stand at each step. We use an off-policy actor-critic reinforcement learning approach with experience replay to approximate the optimal MDP policy.
In three different case studies, the approach shows good scalability for providing large-scale forest management plans. The expected return values and the computed DRL policy are found to be identical to the exact optimal MDP solution when this exact solution is available, i.e., for low-dimensional models. DRL is also found to outperform genetic algorithm (GA) solutions, which were used as benchmarks for the large-scale model policies.","deep reinforcement learning; dynamic programming; forest management; Markov Decision Process; wildfire mitigation","en","journal article","","","","","","","","","","","Structural Design & Mechanics","","",""
"uuid:93439ac0-5f86-407c-bf87-6e9b6fa679f1","http://resolver.tudelft.nl/uuid:93439ac0-5f86-407c-bf87-6e9b6fa679f1","Long-term viscoelastic deformation monitoring of a concrete dam: A multi-output surrogate model approach for parameter identification","Lin, Chaoning (Hohai University); Li, Tongchun (Hohai University); Chen, Siyu (Nanjing Hydraulic Research Institute); Yuan, Li (Hohai University); van Gelder, P.H.A.J.M. (TU Delft Safety and Security Science); Yorke-Smith, N. (TU Delft Algorithmics)","","2022","Dam safety monitoring has become an important topic and is critical for evaluating a dam's safety status. This study focuses on identifying the mechanical properties of a concrete dam from long-term viscoelastic deformation monitoring data. A novel inversion framework is proposed in which a surrogate model, instead of the finite element model, is placed inside the optimization loop. First, a multi-output surrogate model based on a Gaussian process is trained using data from a finite element simulation in the creep regime. In order to efficiently create a high-precision and reliable surrogate model, three test instances are conducted to investigate the impact of sample size, parameter range and output quantity on the performance of the surrogate model. Subsequently, a meta-heuristic optimization algorithm, the multi-verse optimizer, is employed to identify the unknown viscoelastic parameters. The results illustrate that the identified properties allow predictions of dam displacement that are consistent with the monitoring data.
Compared with the traditional inversion method based on finite element modelling, the proposed inversion method based on the multi-output surrogate model not only achieves accurate estimation of mechanical parameters but also greatly improves computational efficiency.","Concrete dam; Inverse analysis; Multi-output Gaussian process; Surrogate model; Viscoelasticity","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-01-09","","","Safety and Security Science","","",""
"uuid:f0090251-0935-4a8c-bec8-5cacb424aec6","http://resolver.tudelft.nl/uuid:f0090251-0935-4a8c-bec8-5cacb424aec6","Machine Learning-Assisted probabilistic fatigue evaluation of Rib-to-Deck joints in orthotropic steel decks","Heng, J. (TU Delft Steel & Composite Structures; Shenzhen University); Zheng, Kaifeng (Southwest Jiaotong University); Feng, Xiaoyang (Southwest Jiaotong University); Veljkovic, M. (TU Delft Steel & Composite Structures); Zhou, Zhixiang (Shenzhen University)","","2022","This study integrates fatigue tests and numerical prediction to derive a comprehensive probability-stress-life (P-S-N) curve for rib-to-deck (RD) welded joints in orthotropic steel decks. Fatigue tests of RD joints are conducted to measure fatigue strength and crack growth data. Based on the tests, a probabilistic fatigue crack growth (PFCG) model is established to predict the distribution of fatigue life under various stress ranges. Two machine learning tools are adopted to assist the PFCG model-based prediction, i.e., Gaussian process regression (GPR) and a dynamic Bayesian network (DBN). The GPR is used to train a surrogate model for solving stress intensity factors in the PFCG prediction, using 2,000 samples generated from finite element (FE) analyses. The trained model is then validated on a new dataset of 100 FE samples. An adapted DBN model is proposed to update the PFCG model with the fatigue crack growth data measured from ten specimens. According to the results, the application of GPR can reduce the solution cost of the PFCG prediction by approximately 1,875 times. Compared with the prior PFCG model, the updated posterior model shows improved agreement with the test data, i.e., the maximum difference in fatigue strength between model prediction and test data decreases from 12% to 3%.
Based on the posterior PFCG model, the P-S-N curve of RD joints is statistically derived using sufficient numerical samples.","dynamic Bayesian network; Gaussian process regression; Orthotropic steel decks; Probabilistic fatigue assessment; Rib-to-deck joints","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-12-09","","","Steel & Composite Structures","","",""
"uuid:1f7a8bbb-4cfa-4153-8b68-14a953211cda","http://resolver.tudelft.nl/uuid:1f7a8bbb-4cfa-4153-8b68-14a953211cda","Recent advances to accelerate purification process development: A review with a focus on vaccines","Keulen, D. (TU Delft BT/Bioprocess Engineering); Geldhof, Geoffroy (GSK Vaccines, Rixensart); Bussy, Olivier Le (GSK Vaccines, Rixensart); Pabst, Martin (TU Delft BT/Environmental Biotechnology); Ottens, M. (TU Delft BT/Design and Engineering Education)","","2022","The safety requirements for vaccines are extremely high since they are administered to healthy people. For that reason, vaccine development is time-consuming and very expensive. Reducing time-to-market is key for pharmaceutical companies, saving lives and money. This raises the need for systematic, general, and efficient process development strategies to shorten development times and enhance process understanding. High throughput technologies have tremendously increased the volume of process-related data available and, combined with statistical and mechanistic modeling, have given rise to new high throughput process development (HTPD) approaches. The introduction of model-based HTPD enabled faster and broader screening of conditions and further increased process knowledge. Model-based HTPD has been particularly important for chromatography, which is a crucial separation technique to attain high purities. This review provides an overview of downstream process development strategies and tools used within the (bio)pharmaceutical industry, focusing attention on (protein subunit) vaccine purification processes. Subsequently, high throughput process development and other combinatorial approaches are discussed and compared according to their experimental effort and the process understanding they provide.
Within a growing sea of information, novel modeling tools and artificial intelligence (AI) gain importance for finding patterns behind the data and thereby acquiring a deeper process understanding.","Artificial intelligence; Chromatography; Downstream processing; Model-based high throughput process development; Vaccine purification processes","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:0d0a1144-2971-4904-8fcb-86bff67522bc","http://resolver.tudelft.nl/uuid:0d0a1144-2971-4904-8fcb-86bff67522bc","Radar Perception for Autonomous Unmanned Aerial Vehicles: A Survey","Corradi, Federico (Stichting IMEC Nederland); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2022","The advent of consumer and industrial Unmanned Aerial Vehicles (UAVs), commonly referred to as drones, has opened business opportunities in many fields, including logistics, smart agriculture, inspection, surveillance, and construction. In addition, the autonomous operations of UAVs reduce risks by minimizing the time spent by human workers in harsh environments and lowering costs by automating tasks. For reliability and safety, the drones must sense and avoid potential obstacles and must be capable of safely navigating in unknown environments. UAVs' perception requires reliability in various settings, such as high dust levels, humidity, intense sun glare, darkness, and fog, which can severely obstruct many conventional sensing methods. Radar systems have unique strengths; they can reliably estimate how far away an object is and measure its relative speed via the Doppler effect. In addition, because radars exploit radio waves to sense, they perform well in rain, fog, snow, or smoky environments. This stands in contrast to optical technologies, such as cameras or LIght Detection And Ranging (Lidars), which are more susceptible to the same challenges as the human eye. This survey paper aims to address the signal processing challenges for the exploitation of radar systems in unmanned aerial vehicles for advanced perception, considering recent integration trends and technology capabilities. The focus is on signal processing techniques for low-cost and power-efficient radar sensors, which operate onboard the UAVs in real-time to meet their needs in terms of perception, situational awareness, and navigation.
Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, safe, and autonomous way for UAVs to perceive and interact with the world.","deep learning; drone sensory perception; micro-Doppler processing; radar odometry; radar sensing","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:f57bb145-1039-4f10-b380-b9cb62fc50d5","http://resolver.tudelft.nl/uuid:f57bb145-1039-4f10-b380-b9cb62fc50d5","Coevolution of machine learning and process-based modelling to revolutionize Earth and environmental sciences: A perspective","Razavi, Saman (Global Institute for Water Security; University of Saskatchewan); Hannah, David M. (University of Birmingham); Elshorbagy, Amin (University of Saskatchewan); Kumar, Sujay (NASA Goddard Space Flight Center); Marshall, Lucy (University of New South Wales); Solomatine, D.P. (TU Delft Water Resources; IHE Delft Institute for Water Education; Water Problems Institute of Russian Academy of Sciences); Dezfuli, Amin (NASA Goddard Space Flight Center); Sadegh, Mojtaba (Boise State University); Famiglietti, James (Global Institute for Water Security)","","2022","Machine learning (ML) applications in Earth and environmental sciences (EES) have gained incredible momentum in recent years. However, these ML applications have largely evolved in ‘isolation’ from the mechanistic, process-based modelling (PBM) paradigms, which have historically been the cornerstone of scientific discovery and policy support. In this perspective, we assert that the cultural barriers between the ML and PBM communities limit the potential of ML, and even its ‘hybridization’ with PBM, for EES applications. Fundamental, but often ignored, differences between ML and PBM are discussed as well as their strengths and weaknesses in light of three overarching modelling objectives in EES, (1) nowcasting and prediction, (2) scenario analysis, and (3) diagnostic learning. 
The paper ponders a ‘coevolutionary’ approach to model building, shifting away from a borrowing culture towards a co-creation culture, to develop a generation of models that leverage the unique strengths of ML, such as scalability to big data and high-dimensional mapping, while remaining faithful to the process-based knowledge base and the principles of model explainability and interpretability, and therefore falsifiability.","artificial intelligence; deep learning; machine learning; modelling objective; policy support; prediction; process-based modelling; scenarios; scientific discovery","en","journal article","","","","","","","","","","","Water Resources","","",""
"uuid:cde47057-8994-4f86-9e08-c487452526f6","http://resolver.tudelft.nl/uuid:cde47057-8994-4f86-9e08-c487452526f6","Exploring the Influence of the Visual Attributes of Kaplan’s Preference Matrix in the Assessment of Urban Parks: A Discrete Choice Analysis","Shayestefar, Marjan (Golestan University); Pazhouhanfar, Mahdieh (Golestan University); van Oel, C.J. (TU Delft Design & Construction Management); Grahn, Patrik (Swedish University of Agricultural Sciences)","","2022","A significant majority of the literature on natural environments and urban green spaces justifies the preferences that people have for natural environments using four predictors defined by Kaplan’s preference matrix theory, namely coherence, legibility, complexity, and mystery. However, there are no studies explicitly focusing on the visual attributes assigned to each of these four predictors. Thus, the aim of this study was to explore the influence of nine visual attributes derived from the four predictors of Kaplan’s matrix on people’s preferences in the context of urban parks. A discrete choice experiment was used to obtain responses from a sample of 396 students of Golestan University. Students randomly evaluated their preferences towards a set of potential scenarios with urban park images. The results of a random parameter logit analysis showed that all of the attributes of complexity (variety of elements, number of colors, and organization of elements) and one attribute each of coherence (uniformity), mystery (visual access), and legibility (distinctive elements) affect students’ choices for urban parks, while one attribute each of mystery (physical access) and legibility (wayfinding) did not affect the choices. Furthermore, the results indicated preference heterogeneity for the attributes.
The findings of this study can provide guidance for designing parks.","information processing theory; landscape design; multinomial logit model; predictors of preference","en","journal article","","","","","","","","","","","Design & Construction Management","","",""
"uuid:1bc69351-0de3-4a65-84f7-5441e58f7bc3","http://resolver.tudelft.nl/uuid:1bc69351-0de3-4a65-84f7-5441e58f7bc3","Quantum Mechanics, Ambiguity and Design: Towards a Framework","Verstegen, Bas (Student TU Delft); Ozcan Vieira, E. (TU Delft Design Aesthetics); Delle Monache, S. (TU Delft Design Aesthetics)","","2022","Quantum mechanics could have a fundamental impact on design models and measurement. Quantum mechanics allows us to fill in the blanks of classical models of design through its ability to explain ambiguous states of design. An ambiguous state is one in which design exists in between two binary states, as a superposition. Most designers are likely to be unfamiliar with quantum mechanics, a subject that is complex and sometimes contradicts human-scale mechanics. By discussing the opportunities of quantum mechanics for design, we propose a framework to model and measure ambiguous dimensions of design through quantum superpositions. The proposed framework includes dimensions for the directionality of design (convergence or divergence), the degree of design embodiment (from low to high) and the decision-making of the designer (yes to no). Once the designer attempts the measurement of a superposition, a binary state can be distilled. For the act of designing, filling in the blanks is equal to sculpting away superposed states. In this philosophy, to design is to measure.
This early stage research raises areas of opportunities and suggests further research directions for quantum mechanics and design.","quantum; Creativity; Design process; ambiguity","en","conference paper","Association for Computing Machinery (ACM)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-12-20","","","Design Aesthetics","","",""
"uuid:dec928cf-d678-4af0-a32e-c4dd7e13232e","http://resolver.tudelft.nl/uuid:dec928cf-d678-4af0-a32e-c4dd7e13232e","Gaussian Processes for Advanced Motion Control","Poot, Maurice (Eindhoven University of Technology); Portegies, Jim (Eindhoven University of Technology); Mooren, Noud (Eindhoven University of Technology); van Haren, Max (Eindhoven University of Technology); van Meer, Max (Eindhoven University of Technology); Oomen, T.A.E. (TU Delft Team Jan-Willem van Wingerden; Eindhoven University of Technology)","","2022","Machine learning techniques, including Gaussian processes (GPs), are expected to play a significant role in meeting speed, accuracy, and functionality requirements in future data-intensive mechatronic systems. This paper aims to reveal the potential of GPs for motion control applications. Successful applications of GPs for feedforward and learning control, including the identification and learning for noncausal feedforward, position-dependent snap feedforward, nonlinear feedforward, and GP-based spatial repetitive control, are outlined. Experimental results on various systems, including a desktop printer, wirebonder, and substrate carrier, confirmed that data-based learning using GPs can significantly improve the accuracy of mechatronic systems.","feedforward control; gaussian processes; learning control","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-11-01","","","Team Jan-Willem van Wingerden","","",""
"uuid:5b9ffeec-0a4d-4786-8b02-2dd880646bed","http://resolver.tudelft.nl/uuid:5b9ffeec-0a4d-4786-8b02-2dd880646bed","Spectral analysis of the zigzag process","Bierkens, G.N.J.C. (TU Delft Statistics); Verduyn Lunel, Sjoerd M. (Universiteit Utrecht)","","2022","The zigzag process is a variant of the telegraph process with position dependent switching intensities. A characterization of the L2-spectrum for the generator of the one-dimensional zigzag process is obtained in the case where the marginal stationary distribution on R is unimodal and the refreshment intensity is zero. Sufficient conditions are obtained for a spectral mapping theorem, mapping the spectrum of the generator to the spectrum of the corresponding Markov semigroup. Furthermore results are obtained for symmetric stationary distributions and for perturbations of the spectrum, in particular for the case of a non-zero refreshment intensity. In the examples we consider (including a Gaussian target distribution) a slight increase of the refreshment intensity above zero results in a larger L2-spectral gap, corresponding to an improved convergence in L2.","Exponential ergodicity; Markov semigroup; Non-reversible Markov process; Perturbation theory; Piecewise deterministic Markov process; Spectral theory; Telegraph process; Zigzag process","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:b25f24df-211f-4bf3-8025-79908b5f7c25","http://resolver.tudelft.nl/uuid:b25f24df-211f-4bf3-8025-79908b5f7c25","Interpretation of run-of-mine comminution and recovery parameters using multi-element geochemical data clustering","van Duijvenbode, J.R. (TU Delft Resource Engineering); Cloete, Louis M. (AngloGold Ashanti South Africa, Johannesburg); Soleymani Shishvan, M. (TU Delft Resource Engineering); Buxton, M.W.N. (TU Delft Resource Engineering)","","2022","Multi-element (ME) datasets provide comprehensive geochemical signatures of an orebody and are commonly used to gain insight into the mineralogy, lithology and alteration patterns, and to identify target-pathfinders. However, little effort has been made to use these data to explain comminution or recovery characteristics. This paper describes an agglomerative hierarchical clustering approach applied to ME data from the Tropicana Gold Mine, Australia, and investigates the relationship between the resultant classes and run-of-mine comminution and recovery parameters. First, it is demonstrated how an industry-scale ME dataset is prepared for clustering. The preparation consists of verifying the absence of interlaboratory and intralaboratory bias between measurements, centred log-ratio transformation (clr), normalisation and principal component analysis (PCA). Afterwards, the first case study indicates that the clustering separation is primarily driven by geochemical differences caused by major rock-forming mineral signatures (felsic vs mafic, alteration vs no alteration, chert or quartz lithologies, unmineralised vs mineralised material). This case study separates the ME dataset into five unmineralised and two Au-mineralised material classes. The second case study continues with the two identified mineralised material classes and further separates these samples into five new classes.
These classes are explored geochemically and, using the spatial context (within domains), are better matched with metallurgical test results. It is found that domain-related material class proportions assist in interpreting different processing proxies such as the Equotip hardness (Leeb), Bond Work index (BWi), Axb, and processing recovery and reagent consumption. Knowledge of the processing parameters per domain and class composition can be used to infer such characteristics in the absence of standard metallurgical tests. This new approach of gaining insights into comminution and recovery parameters through geochemical analysis demonstrates the benefit of the material fingerprinting concept.","Agglomerative hierarchical clustering; Comminution and recovery parameters; Four-acid digestive multi-element ICP data; Geochemistry; Mineral processing; Mining; Tropicana Gold Mine","en","journal article","","","","","","","","","","","Resource Engineering","","",""
"uuid:e3ab4443-d6fd-45e5-bc60-e38401ad55b9","http://resolver.tudelft.nl/uuid:e3ab4443-d6fd-45e5-bc60-e38401ad55b9","Identifying enablers and relational ontology networks in design for digital fabrication","Ng, Ming Shan (ETH Zürich; Kyoto Institute of Technology); Hall, Daniel M. (TU Delft Design & Construction Management; ETH Zürich); Schmailzl, Marc (Technische Universität München; Ostbayerische Technische Hochschule Regensburg (OTH)); Linner, Thomas (Ostbayerische Technische Hochschule Regensburg (OTH)); Bock, Thomas (Technische Universität München)","","2022","As the use of digital fabrication increases in architecture, engineering and construction, the industry seeks appropriate management and processes to enable its adoption during the design/planning phase. Many enablers have been identified across various studies; however, a comprehensive synthesis defining the enablers of design for digital fabrication does not yet exist. This work conducts a systematic literature review of 59 journal articles published in the past decade and identifies 140 enablers under eight categories: actors, resources, conditions, attributes, processes, artefacts, values and risks. The enablers’ frequency network is illustrated using an adjacency matrix. Through the lens of actor-network theory, the work creates a relational ontology to demonstrate the linkages between different enablers. Three examples are presented using onion diagrams: circular construction focus, business model focus and digital twin in industrialisation focus. Finally, this work discusses the intersection of relational ontology with process modelling to design future digital fabrication work routines.","Actor-Network Theory (ANT); Digital fabrication; Enablers; Process modelling; Relational ontology network","en","review","","","","","","","","","","","Design & Construction Management","","",""
"uuid:87504f01-b94a-4108-ae8e-d1a5ee565b56","http://resolver.tudelft.nl/uuid:87504f01-b94a-4108-ae8e-d1a5ee565b56","Systematic solvent screening and selection for polyhydroxyalkanoates (PHBV) recovery from biomass","Vermeer, C.M. (TU Delft BT/Environmental Biotechnology); Nielsen, Maaike (Student TU Delft); Eckhardt, Vincent (Student TU Delft); Hortensius, Matthijs (Student TU Delft); Tamis, J. (TU Delft BT/Environmental Biotechnology); Picken, S.J. (TU Delft ChemE/Advanced Soft Matter); Meesters, G.M.H. (TU Delft ChemE/Product and Process Engineering); Kleerebezem, R. (TU Delft BT/Environmental Biotechnology)","","2022","The biotechnological production of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) derived from organic waste streams by mixed microbial communities is well established at the pilot level. However, there is limited research on the recovery of the biopolymer from the microbial biomass, while its impact on product quality and product costs is major. When applying solvent extraction, the choice of solvent has a profound influence on many aspects of the process design. This study provides a framework to perform a systematic solvent screening for PHBV extraction. First, a database was constructed of 35 solvents that were assessed according to six different selection criteria. Then, six solvents were chosen for further experimental analysis, including 1-butanol, 2-butanol, 2-ethyl hexanol (2-EH), dimethyl carbonate (DMC), methyl isobutyl ketone (MIBK), and acetone. The main findings are that the extractions with acetone and DMC achieved the highest yields (91-95%) with reasonably high purities (93-96%), with acetone having the key advantage that water can be used as an anti-solvent. Moreover, the results provided new insights into the mechanisms behind PHBV extraction by showing that at elevated temperatures the extraction efficiency is determined less by the solvent's solubility parameters and more by the solvent size.
Although case-specific factors play a role in the final solvent choice, we believe that this study provides a general strategy for the solvent selection process.","Biopolymers; Downstream processing; Mixed microbial communities; Polyhydroxyalkanoates; Solvent extraction; Waste-to-resources","en","journal article","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:7d129503-c891-4bb5-848c-6c4c53b729cf","http://resolver.tudelft.nl/uuid:7d129503-c891-4bb5-848c-6c4c53b729cf","Single-Pulse Estimation of Target Velocity on Planar Arrays","Kokke, C.A. (TU Delft Signal Processing Systems); Coutino, Mario (TU Delft Microwave Technology and Systems for Radar; TU Delft Signal Processing Systems; DIANA FEA); Heusdens, R. (TU Delft Signal Processing Systems; Netherlands Defence Academy); Leus, G.J.T. (TU Delft Signal Processing Systems); Anitori, L. (TU Delft Microwave Technology and Systems for Radar; TU Delft Atmospheric Remote Sensing; DIANA FEA)","","2022","Doppler velocity estimation in pulse-Doppler radar is done by evaluating the target returns of bursts of pulses. While this provides convenience and accuracy, it requires multiple pulses. In adaptive and cognitive radar systems, the ability to adapt on consecutive pulses, instead of bursts, brings potential performance benefits. Hence, with radar transceiver arrays growing increasingly larger in their number of elements over the years, it may be time to re-evaluate how Doppler velocity can be estimated when using large planar arrays. In this work, we present variance bounds on the estimation of velocity using the Doppler shift as it appears in the array model. We also propose an efficient method of performing the velocity estimation and we verify its performance using Monte Carlo simulations.","array signal processing; Cramér-Rao bound; Doppler processing; pulse-Doppler radar; velocity estimation","en","conference paper","European Signal Processing Conference, EUSIPCO","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-04-24","","","Signal Processing Systems","","",""
"uuid:7b334e72-1230-4dbe-8973-d1facf647b9b","http://resolver.tudelft.nl/uuid:7b334e72-1230-4dbe-8973-d1facf647b9b","HyEnA: A Hybrid Method for Extracting Arguments from Opinions","van der Meer, M.T. (TU Delft Interactive Intelligence; Universiteit Leiden); Liscio, E. (TU Delft Interactive Intelligence); Jonker, C.M. (TU Delft Interactive Intelligence; Universiteit Leiden); Plaat, Aske (Universiteit Leiden); Vossen, Piek (Computational Lexicology and Terminology Lab (CLTL)); Murukannaiah, P.K. (TU Delft Interactive Intelligence)","Schlobach, Stefan (editor); Perez-Ortiz, Maria (editor); Tielman, Myrthe (editor)","2022","The key arguments underlying a large and noisy set of opinions help understand the opinions quickly and accurately. Fully automated methods can extract arguments but (1) require large labeled datasets and (2) work well for known viewpoints, but not for novel points of view. We propose HyEnA, a hybrid (human + AI) method for extracting arguments from opinionated texts, combining the speed of automated processing with the understanding and reasoning capabilities of humans. We evaluate HyEnA on three feedback corpora. We find that, on the one hand, HyEnA achieves higher coverage and precision than a state-of-the-art automated method, when compared on a common set of diverse opinions, justifying the need for human insight. On the other hand, HyEnA requires less human effort and does not compromise quality compared to (fully manual) expert analysis, demonstrating the benefit of combining human and machine intelligence.","argument extraction; hybrid intelligence; natural language processing","en","conference paper","IOS Press","","","","","","","","","","Interactive Intelligence","","",""
"uuid:9aab5c5e-d3af-485a-bcee-857171175aa9","http://resolver.tudelft.nl/uuid:9aab5c5e-d3af-485a-bcee-857171175aa9","Improved Direction Finding Accuracy for A Limited Number of Antenna Elements with Harmonic Characteristic Analysis","Yuan, S. (TU Delft Microwave Sensing, Signals & Systems); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2022","A direction-finding approach for arrays with a limited number of antenna elements has been investigated. A method based on the harmonic analysis of the received signal has been proposed. The angle estimation accuracy has been improved by angle searching and peak detection. The proposed method is theoretically described and numerical simulations are provided to verify its effectiveness. Compared with classical direction-finding methods with limited antenna elements, significant improvements have been demonstrated.","array signal processing; direction of arrival (DOA); harmonic characteristic analysis; limited antenna elements","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-05-01","","","Microwave Sensing, Signals & Systems","","",""
"uuid:fcc28e65-c75c-4aa1-88d1-2d1c3ae88ad8","http://resolver.tudelft.nl/uuid:fcc28e65-c75c-4aa1-88d1-2d1c3ae88ad8","Designing Hybrid Intelligence Techniques for Facilitating Collaboration Informed by Social Science","Matej Hrkalovic, T. (TU Delft Pattern Recognition and Bioinformatics; Vrije Universiteit Amsterdam)","","2022","Designing (socially) intelligent systems for facilitating collaborations in human-human and human-AI teams will require them to have a basic understanding of the principles underlying social decision-making. Partner selection - the ability to identify and select suitable partners for collaborative relationships - is one relevant component of social intelligence and an important ingredient for successful relationship management. In everyday life, decisions to engage in joint undertakings are often based on impressions made during social interactions with potential partners. These impressions, and consequently partner selection, are informed by (non-)verbal behavioral cues. Despite its importance, research investigating how these impressions and partner selection decisions unfold in naturalistic settings seems to be lacking. Thus, in this paper, we present a project focused on understanding, predicting and modeling partner selection and its relationship with human impressions in semi-naturalistic settings, such as social interactions, with the aim of informing future design approaches for (hybrid) intelligence systems that can understand, predict and aid in initiating and facilitating (current and future) collaborations.","Collaboration; Impression formation; Partner Selection; Social Signal Processing; User-modelling","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:c9911c29-eb14-4d05-9130-a055ef215b7a","http://resolver.tudelft.nl/uuid:c9911c29-eb14-4d05-9130-a055ef215b7a","Design Feasibility of an Energy-efficient Wrist Flexion-Extension Exoskeleton using Compliant Beams and Soft Actuators","Amoozandeh, A. (TU Delft Mechatronic Systems Design); Caasenbrood, Brandon (Eindhoven University of Technology)","","2022","Passive and active exoskeletons have been used over recent decades. However, as in many physiological systems, the majority exploit both active and passive elements to minimize energy consumption while retaining proper motion control. In light of this, we propose a design that combines compliant mechanisms as passive support for gravity balancing of the hand's weight and soft actuators as active support for wrist flexion-extension. Our approach offers a safe, lightweight solution that intrinsically complements and supports the wrist's degrees of freedom. We hypothesize that the proposed soft wearable device is able to increase the range of motion and reduce muscle fatigue while being energy-conservative by balancing the passive and active subsystems. In this work, we perform a design feasibility study for such soft wrist exoskeletons, particularly focused on wrist flexion-extension rehabilitation. Through optimization, geometries for the required functionality of the compliant beam and soft actuator are obtained, and their performance as separate subsystems is evaluated by simulations and experiments. Under the appropriate inputs, we show that the system can introduce a controllable bifurcation. Through experiments, we investigate such bi-stability and explore its usefulness for rehabilitative support of wrist flexion-extension. 
In short, the proposed wearable can offer a viable, energy-efficient alternative to traditional rehabilitation technologies.","Wrist; Performance evaluation; Actuators; Energy consumption; Manufacturing processes; Wearable computers; Exoskeletons","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Mechatronic Systems Design","","",""
"uuid:457e1062-732d-4574-9197-1cf8c4439365","http://resolver.tudelft.nl/uuid:457e1062-732d-4574-9197-1cf8c4439365","Flowsheet generation through hierarchical reinforcement learning and graph neural networks","Stops, L. (TU Delft ChemE/Product and Process Engineering); Leenhouts, Roel (Student TU Delft); Gao, Q. (TU Delft ChemE/Product and Process Engineering); Schweidtmann, A.M. (TU Delft ChemE/Product and Process Engineering)","","2022","Process synthesis experiences a disruptive transformation accelerated by artificial intelligence. We propose a reinforcement learning algorithm for chemical process design based on a state-of-the-art actor-critic logic. Our proposed algorithm represents chemical processes as graphs and uses graph convolutional neural networks to learn from process graphs. In particular, the graph neural networks are implemented within the agent architecture to process the states and make decisions. We implement a hierarchical and hybrid decision-making process to generate flowsheets, where unit operations are placed iteratively as discrete decisions and corresponding design variables are selected as continuous decisions. We demonstrate the potential of our method to design economically viable flowsheets in an illustrative case study comprising equilibrium reactions, azeotropic separation, and recycles. The results show quick learning in discrete, continuous, and hybrid action spaces. The method is predestined to include large action-state spaces and an interface to process simulators in future research.","artificial intelligence; graph convolutional neural networks; graph generation; process synthesis; reinforcement learning","en","journal article","","","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:1d9a4741-2932-4e48-8d36-b16bcc247605","http://resolver.tudelft.nl/uuid:1d9a4741-2932-4e48-8d36-b16bcc247605","Design of a microfluidic mixer channel: First steps into creating a fluorescent dye-based biosensor for mAb aggregate detection","Neves Sao Pedro, M. (TU Delft BT/Bioprocess Engineering); Silva dos Santos, M. (TU Delft BT/Bioprocess Engineering); Eppink, Michel H.M. (Wageningen University & Research; Byondis B.V., Nijmegen); Ottens, M. (TU Delft BT/Design and Engineering Education)","","2022","A major challenge in the transition to continuous biomanufacturing is the lack of process analytical technology (PAT) tools which are able to collect real-time information on the process and elicit a response to facilitate control. One of the critical quality attributes (CQAs) of interest during monoclonal antibody production is aggregate formation. The development of a real-time PAT tool to monitor aggregate formation is then crucial to have immediate feedback and process control. Miniaturized sensors placed after each unit operation can be a powerful solution to speed up an analytical measurement due to their characteristic short reaction time. In this work, a micromixer structure capable of mixing two streams is presented, to be employed in the detection of mAb aggregates using fluorescent dyes. Computational fluid dynamics (CFD) simulations were used to compare the mixing performance of a series of the proposed designs. A final design of a zigzag microchannel with a 45° angle was reached and this structure was subsequently fabricated and experimentally validated with colour dyes and, later, with a FITC-IgG molecule. The designed zigzag micromixer presents a mixing index of around 90%, obtained in less than 30 seconds. 
Therefore, a micromixer channel capable of fast and efficient mixing is hereby demonstrated, to be used as a real-time PAT tool for fluorescence-based detection of protein aggregation.","computational fluid dynamics; continuous biomanufacturing; microfluidics; Process Analytical Technology (PAT); protein aggregation","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:6ffecad6-4a1a-4295-98db-9a12b346921a","http://resolver.tudelft.nl/uuid:6ffecad6-4a1a-4295-98db-9a12b346921a","Bernstein-von Mises theorem for the Pitman-Yor process of nonnegative type","Franssen, S.E.M.P. (TU Delft Statistics); van der Vaart, A.W. (TU Delft Statistics)","","2022","The Pitman-Yor process is a random probability distribution that can be used as a prior distribution in a nonparametric Bayesian analysis. The process is of species sampling type and generates discrete distributions, which yield of the order n^σ different values (“species”) in a random sample of size n, if the type σ is positive. Thus this type parameter can be set to target true distributions of various levels of discreteness, making the Pitman-Yor process an interesting prior in this case. It was previously shown that the resulting posterior distribution is consistent if and only if the true distribution of the data is discrete. In this paper we derive the distributional limit of the posterior distribution, in the form of a (corrected) Bernstein-von Mises theorem, which previously was known only in the continuous, inconsistent case. It turns out that the Pitman-Yor posterior distribution has good behaviour if the true distribution of the data is discrete with atoms that decrease not too slowly. Credible sets derived from the posterior distribution provide valid frequentist confidence sets in this case. For a general discrete distribution, the posterior distribution, although consistent, may contain a bias which does not converge to zero at the √n rate and invalidates posterior inference. We propose a bias correction that solves this problem. We also consider the effect of estimating the type parameter from the data, both by empirical Bayes and full Bayes methods. 
In a small simulation study we illustrate that without bias correction the coverage of credible sets can be arbitrarily low, also for some discrete distributions.","Bernstein-von Mises theorem; credible set; empirical Bayes; Pitman-Yor process; species sampling; weak convergence","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:7c3d0b09-869f-452f-abbc-8bd363cd03c8","http://resolver.tudelft.nl/uuid:7c3d0b09-869f-452f-abbc-8bd363cd03c8","Developing a barrier management framework for dealing with Natech domino effects and increasing chemical cluster resilience","Zeng, Tao (South China University of Technology; Guangdong Provincial Science and Technology Collaborative Innovation Center for Work Safety; Katholieke Universiteit Leuven); Chen, Guohua (South China University of Technology; Guangdong Provincial Science and Technology Collaborative Innovation Center for Work Safety); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Katholieke Universiteit Leuven; Universiteit Antwerpen); Men, Jinkun (South China University of Technology; Guangdong Provincial Science and Technology Collaborative Innovation Center for Work Safety)","","2022","A domino effect triggered by a natural event (a so-called Natech domino effect) represents a typical high-impact low-probability (HILP) event, which may lead to catastrophic consequences. The presence of safety barriers could have an impact on the effects by impeding propagation patterns and mitigating potential consequences. However, coordinating and maintaining safety measures to establish an effective barrier system against Natech domino effects is complicated. In this paper, the concept of what constitutes a safety barrier and the principles of barrier management are reviewed. Subsequently, the complex phenomenon of Natech domino effects is studied at the individual installation level, while the propagation pattern is explored at the system level. The application of safety barriers is discussed with the aim of coping with potential Natech domino effects. A systematic framework of barrier management is developed to establish and improve the barrier system in the whole cycle (design & construction, operation, accident, recovery & improvement) of a chemical industrial area. 
The challenges are discussed to highlight future study needs.","Barrier management; Chemical industry; Natech domino effect; Prevention; Process industry; Risk","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Safety and Security Science","","",""
"uuid:d2b8892d-1a48-4c31-8c72-017607a8ffd0","http://resolver.tudelft.nl/uuid:d2b8892d-1a48-4c31-8c72-017607a8ffd0","Conditional empirical copula processes and generalized measures of association","Derumigny, Alexis (TU Delft Statistics); Fermanian, Jean David (CREST-ENSAE)","","2022","We study the weak convergence of conditional empirical copula processes indexed by general families of conditioning events that have nonzero probabilities. Moreover, we also study the case where the conditioning events are chosen in a data-driven way. The validity of several bootstrap schemes is stated, including the exchangeable bootstrap. We define general multivariate measures of association, possibly given some fixed or random conditioning events. By applying our theoretical results, we prove the asymptotic normality of the estimators of such measures. We illustrate our results with financial data.","Empirical copula process; bootstrap; conditional copula; weak convergence","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:5b8c2580-a4e1-4c9e-a244-505f0cb8794b","http://resolver.tudelft.nl/uuid:5b8c2580-a4e1-4c9e-a244-505f0cb8794b","The 4th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild (MSECP-Wild)","Dudzik, B.J.W. (TU Delft Pattern Recognition and Bioinformatics); Küster, Dennis (University of Bremen); St-Onge, David (Ecole de Technologie Superieure (ETS)); Putze, Felix (University of Bremen)","","2022","The ability to automatically infer relevant aspects of human users' thoughts and feelings is crucial for technologies to adapt their behaviors in complex interactions intelligently (e.g., social robots or tutoring systems). Research on multimodal analysis has demonstrated the potential of technology to provide such estimates for a broad range of internal states and processes. However, constructing robust enough approaches for deployment in real-world applications remains an open problem. The MSECP-Wild workshop series serves as a multidisciplinary forum to present and discuss research addressing this challenge. This 4th iteration focuses on addressing varying contextual conditions (e.g., throughout an interaction or across different situations and environments) in intelligent systems as a crucial barrier for more valid real-world predictions and actions. Submissions to the workshop span efforts relevant to multimodal data collection and context-sensitive modeling. These works provide important impulses for discussions of the state-of-the-art and opportunities for future research on these subjects.","Affective Computing; Context-awareness; Multimodal Data; Social Signal Processing; Ubiquitous Computing; User-Modeling","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:915744c3-4ae8-4f31-bfe8-8a336759967a","http://resolver.tudelft.nl/uuid:915744c3-4ae8-4f31-bfe8-8a336759967a","Exploring the Detection of Spontaneous Recollections during Video-viewing In-the-Wild using Facial Behavior Analysis","Dudzik, B.J.W. (TU Delft Pattern Recognition and Bioinformatics); Hung, H.S. (TU Delft Pattern Recognition and Bioinformatics)","","2022","Intelligent systems might benefit from automatically detecting when a stimulus has triggered a user's recollection of personal memories, e.g., to identify that a piece of media content holds personal significance for them. While computational research has demonstrated the potential to identify related states based on facial behavior (e.g., mind-wandering), the automatic detection of spontaneous recollections specifically has not been investigated thus far. Motivated by this, we present machine learning experiments exploring the feasibility of detecting whether a video clip has triggered personal memories in a viewer based on the analysis of their Head Rotation, Head Position, Eye Gaze, and Facial Expressions. Concretely, we introduce an approach for automatic detection and evaluate its potential for predictions using in-the-wild webcam recordings. Overall, our findings demonstrate the capacity for above-chance detections in both settings, with substantially better performance for the video-independent variant. Beyond this, we investigate the role of person-specific recollection biases for predictions of our video-independent models and the importance of specific modalities of facial behavior. Finally, we discuss the implications of our findings for detecting recollections and user-modeling in adaptive systems.","Affective Computing; Cognitive Processing; Facial Behavior Analysis; Memories; Mind-Wandering; Recollection; User-Modeling","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:4d278dc4-fc88-4bdc-abb2-adf7c128eb16","http://resolver.tudelft.nl/uuid:4d278dc4-fc88-4bdc-abb2-adf7c128eb16","Novel sediment sampling method provides new insights into vertical grain size variability due to marine and aeolian beach processes","van IJzendoorn, Christa (TU Delft Coastal Engineering); Hallin, E.C. (TU Delft Coastal Engineering; Lund University); Cohn, Nicholas (U.S. Army Engineer Research and Development Center); Reniers, A.J.H.M. (TU Delft Environmental Fluid Mechanics); de Vries, S. (TU Delft Coastal Engineering)","","2022","In sandy beach systems, the aeolian sediment transport can be governed by the vertical structure of the sediment layers at the bed surface. Here, data collected with a newly developed sand scraper is presented to determine high-resolution vertical grain size variability and how it is affected by marine and aeolian processes. Sediment samples at up to 2 mm vertical resolution down to 50 mm depth were collected at three beaches: Waldport (Oregon, USA), Noordwijk (the Netherlands) and Duck (North Carolina, USA). The results revealed that the grain size in individual layers can differ considerably from the median grain size of the total sample. The most distinct temporal variability occurred due to marine processes that resulted in significant morphological changes in the intertidal zone. The marine processes during high water resulted both in fining and coarsening of the surface sediment. Especially near the upper limit of wave runup, the formation of a veneer of coarse sediment was observed. Although the expected coarsening of the near-surface grain size during aeolian transport events was observed at times, the opposite trend also occurred. The latter could be explained by the formation and propagation of aeolian bedforms within the intertidal zone locally resulting in sediment fining at the bed surface. 
The presented data lays the basis for future sediment sampling strategies and sediment transport models that investigate the feedbacks between marine and aeolian transport, and the vertical variability of the grain size distribution.","aeolian; coastal geomorphology; coastal processes; grain size; intertidal beach; sediment availability; sediment transport","en","journal article","","","","","","","","","","","Coastal Engineering","","",""
"uuid:c8de2a2d-4349-4a93-927c-31c79501d91e","http://resolver.tudelft.nl/uuid:c8de2a2d-4349-4a93-927c-31c79501d91e","Robust Algorithm for Signal Digital Detection on the Background of Non-Gaussian Passive Interferences","Ianovskyi, F. (TU Delft Atmospheric Remote Sensing; National Aviation University); Prokopenko, Igor (National Aviation University); Pitertsev, Alexander (National Aviation University); Rhee, Huinam (Sunchon National University); Dmytruk, Anastasiia (National Aviation University)","Kolosovs, Deniss (editor)","2022","This paper proposes a generalized mathematical model of different passive interferences and develops an effective digital signal processing algorithm for detection against this background. Interference models based on K-distributed random processes are used, with parametrization for the unwanted reflections from the atmosphere, land, and sea. A robust algorithm for signal detection against such interferences, in particular in the case of non-Gaussian distributions, is developed. Its effectiveness is investigated and confirmed.","algorithm analysis; algorithm design; clutter; digital signal processing; radar detection; ranking","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Atmospheric Remote Sensing","","",""
"uuid:a022fe47-9a1f-40f1-9e06-ea0938a86731","http://resolver.tudelft.nl/uuid:a022fe47-9a1f-40f1-9e06-ea0938a86731","Recognizing non-native spoken words in background noise increases interference from the native language","Hintz, Florian (Max Planck Institute for Psycholinguistics); Voeten, Cesko C. (Fryske Akademy - KNAW); Scharenborg, O.E. (TU Delft Multimedia Computing)","","2022","Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition – especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor, overlapping with the English target in phonological onset (e.g., window), a Dutch competitor, overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half was presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. For target-absent trials, we observed that the likelihood of fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. 
Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference of one’s native language during non-native spoken-word recognition under adverse conditions.","Bilingual processing; Eye-tracking; Onset competition","en","journal article","","","","","","","","","","","Multimedia Computing","","",""
"uuid:1e6d5844-9b25-45a9-ac15-c2166c6b1d50","http://resolver.tudelft.nl/uuid:1e6d5844-9b25-45a9-ac15-c2166c6b1d50","The impact of clastic syn-sedimentary compaction on fluvial-dominated delta morphodynamics","Valencia, A.A. (TU Delft Applied Geology; Universitas Indonesia); Storms, J.E.A. (TU Delft Applied Geology); Walstra, D.J.R. (TU Delft Coastal Engineering; Deltares); van der Vegt, Helena (Deltares); Jagers, Hendrik R.A. (Deltares)","","2022","In natural deltaic settings, mixed hydrodynamic forcings and sediment properties are known to influence the preserved delta deposits. One process that has not received much attention yet is the effect of syn-sedimentary compaction of clastic sediment on millennial-scale delta evolution. To study how compaction interacts with delta morphodynamics and preserved sediment, a modelling approach is proposed. A 1D grain-size dependent compaction model was implemented into Delft3D-FLOW, which provides an opportunity to understand the underexplored connection between grain sizes supplied to the deltas and sediment compaction. The compaction model allows deposited sediment to decrease in volume due to the accumulation of newly deposited sediments above or the elapsed time. Differences in morphological trends are presented for scenarios defined by the composition of sediment supply (mud rich and sand rich) and the maximum allowed compaction rate in the model (0–10 mm/year). The resultant deposits are classified into sub-environments: delta top, delta front and pro delta. The delta top geometry (e.g. area increase, rugosity and aspect ratio), sediment distribution alongshore and across sub-environments, and delta top accommodation (e.g. volume reduction and average water depth) are compared. The modelling results show that compaction of the underlying delta front and pro delta deposits increases the average water depth at the delta top, driving morphological variability observed in the mud-rich and sand-rich deltas. 
The morphological changes are more prominent in the mud-rich deltas, which experience larger compaction-induced volume reduction for the same scenario. Moreover, higher compaction rates further increase the delta top accommodation, resulting in more deposition and evenly distributed sediment at the delta top. This leads to a less significant area increase and a wider delta top with a smoother coastline. The presented modelling results bridge the knowledge gap on the influence of syn-sedimentary compaction on long-term delta morphodynamics and preserved sediment. These findings can be applied to unravel the controlling processes in ancient delta deposits and predict the evolution of modern systems under changing climates.","accommodation; delta morphology; preserved sediment; process-based forward models; syn-sedimentary compaction","en","journal article","","","","","","","","","","","Applied Geology","","",""
"uuid:d0435cf9-7bc5-488c-b838-b1e9835127d2","http://resolver.tudelft.nl/uuid:d0435cf9-7bc5-488c-b838-b1e9835127d2","Microstructural basis for improved corrosion resistance and mechanical properties of fabricated ultra-fine grained Mg-Akermanite composites","Mehdizade, M. (Iran University of Science and Technology); Eivani, A.R. (Iran University of Science and Technology); Tabatabaei, F. (Iran University of Science and Technology); Jafarian, H. R. (Iran University of Science and Technology); Zhou, J. (TU Delft Biomaterials & Tissue Biomechanics)","","2022","In the present research, a composite with a magnesium alloy (WE43) as the matrix and Akermanite as the bioactive and reinforcing agent was fabricated by friction stir processing (FSP), resulting in a microstructure with uniformly distributed fine grains, second-phase particles and micro-sized Akermanite particles. The effect of an addition of Akermanite to the alloy on the mechanical properties and corrosion resistance of the resulting composite was investigated. The compressive strength and ductility of the composite were found to be significantly higher than those of the monolithic WE43 alloy. The value of yield strength of the WE43 sample increased from 75 MPa up to 119 and 225 MPa for WE43-6P and WE43-A-6P samples, respectively. Also, the value of the ultimate compressive strength of the WE43 sample increased from 210 MPa up to 240 and 362 MPa for WE43-6P and WE43-A-6P samples, respectively. The value of elongation for WE43, WE43-6P, and WE43-A-6P samples were 4.5%, 16%, and 22%, respectively. The EIS test showed that the corrosion mechanism of WE43 sample is a combination of localized pitting and uniform corrosion, which shifted towards more uniform corrosion with higher corrosion resistance by applying FSP and adding Akermanite powder. 
The potentiodynamic polarization and in vitro immersion tests confirmed this finding, as evidenced by the increase in polarization resistance from 0.192 for the monolithic WE43 alloy up to 0.339 and 0.609 kΩ/cm² for the WE43-6P and WE43-A-6P samples, respectively. The mass loss rate of the WE43 sample decreased from 20.82 to 10.13 mm per year for the WE43-A-6P sample after 312 h of immersion in SBF solution. All tests confirmed that by applying FSP and adding Akermanite to WE43, the corrosion resistance in the SBF solution could be significantly enhanced.","Akermanite; Composite; Corrosion; Friction stir processing; Magnesium","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:78c7aef8-8e0b-4049-8aa6-64d5b61be3a4","http://resolver.tudelft.nl/uuid:78c7aef8-8e0b-4049-8aa6-64d5b61be3a4","Multi-objective design of aircraft maintenance using Gaussian process learning and adaptive sampling","Lee, J. (TU Delft Air Transport & Operations); Mitici, M.A. (TU Delft Air Transport & Operations)","","2022","Aircraft maintenance design aims to identify strategies that render the aircraft reliable for flight in a cost-efficient manner. These are often conflicting objectives. Moreover, existing studies on maintenance design often limit themselves to only one type of maintenance strategy, overlooking other potentially dominating designs. We propose a framework for aircraft maintenance design with explicit reliability and cost-efficiency objectives. We explore the design space of a variety of maintenance strategies ranging from traditional time-based maintenance to predictive maintenance. To explore this design space, we propose an adaptive algorithm using Gaussian process learning and a novel adaptive sampling method. Gaussian process learning models rapidly pre-evaluate new maintenance designs, while adaptive sampling selects for further exploration only those designs that are expected to improve the available Pareto front of maintenance designs. This framework is illustrated for the maintenance of multi-component aircraft systems with k-out-of-n redundancy. The results show that novel predictive maintenance designs based on Remaining-Useful-Life prognostics dominate other maintenance designs, especially in the knee region of the obtained Pareto front, where the most beneficial balance between conflicting objectives is achieved. 
Our proposed exploration algorithm also outperforms other state-of-the-art exploration algorithms with respect to the quality of the Pareto front obtained.","Aircraft maintenance; Design space exploration; Efficiency; Gaussian process learning; Multi-objective design; Predictive maintenance; Reliability","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:20c15493-c789-476a-ae45-3080157c6eeb","http://resolver.tudelft.nl/uuid:20c15493-c789-476a-ae45-3080157c6eeb","A Pitch-Matched Transceiver ASIC With Shared Hybrid Beamforming ADC for High-Frame-Rate 3-D Intracardiac Echocardiography","Hopf, Y.M. (TU Delft Electronic Instrumentation); Ossenkoppele, Boudewine W. (Thoraxcenter); Soozande, Mehdi (Thoraxcenter); Noothout, E.C. (TU Delft ImPhys/Medical Imaging); Chang, Z.Y. (TU Delft Electronic Instrumentation); Chen, Chao (Thoraxcenter); Vos, H.J. (TU Delft ImPhys/Medical Imaging); Bosch, Johan G. (Thoraxcenter); Verweij, M.D. (TU Delft ImPhys/Medical Imaging); de Jong, N. (TU Delft ImPhys/Medical Imaging); Pertijs, M.A.P. (TU Delft Electronic Instrumentation)","","2022","In this article, an application-specific integrated circuit (ASIC) for 3-D, high-frame-rate ultrasound imaging probes is presented. The design is the first to combine element-level, high-voltage (HV) transmitters and analog front-ends, subarray beamforming, and in-probe digitization in a scalable fashion for catheter-based probes. The integration challenge is met by a hybrid analog-to-digital converter (ADC), combining an efficient charge-sharing successive approximation register (SAR) first stage and a compact single-slope (SS) second stage. Application in large ultrasound imaging arrays is facilitated by directly interfacing the ADC with a charge-domain subarray beamformer, locally calibrating interstage gain errors and generating the SAR reference using a power-efficient local reference generator. Additional hardware-sharing between neighboring channels ultimately leads to the lowest reported area and power consumption across miniature ultrasound probe ADCs. A pitch-matched design is further enabled by an efficient split between the core circuitry and a periphery block, the latter including a datalink performing clock data recovery (CDR) and time-division multiplexing (TDM), which leads to a 12-fold total channel count reduction. 
A prototype of 8×9 elements was fabricated in a TSMC 0.18-μm HV BCD technology, and a 2-D PZT transducer matrix with a pitch of 160 μm and a center frequency of 6 MHz was manufactured on the chip. The imaging device operates at up to 1000 volumes/s, generates 65-V transmit pulses, and has a receive power consumption of only 1.23 mW/element. The functionality has been demonstrated electrically as well as in acoustic and imaging experiments.","3-D ultrasound; Array signal processing; Catheters; high-frame-rate; high-voltage (HV) transmitter; hybrid analog-to-digital converter (ADC); Imaging; intracardiac echocardiography (ICE); Probes; subarray beamforming; successive approximation register (SAR)/single-slope (SS) ADC; Transducers; Transmitters; Ultrasonic imaging; ultrasound application-specific integrated circuit (ASIC)","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Electronic Instrumentation","","",""
"uuid:9fb77013-6acb-4f12-b26c-883aa1cbb588","http://resolver.tudelft.nl/uuid:9fb77013-6acb-4f12-b26c-883aa1cbb588","A Pitch-Matched ASIC with Integrated 65V TX and Shared Hybrid Beamforming ADC for Catheter-Based High-Frame-Rate 3D Ultrasound Probes","Hopf, Y.M. (TU Delft Electronic Instrumentation); Ossenkoppele, B.W. (TU Delft ImPhys/Medical Imaging); Soozande, M. (Erasmus MC); Noothout, E.C. (TU Delft ImPhys/Medical Imaging); Chang, Z.Y. (TU Delft Electronic Instrumentation); Chen, C. (TU Delft Electronic Instrumentation); Vos, H.J. (TU Delft ImPhys/Medical Imaging; Erasmus MC); Bosch, J.G. (Erasmus MC); Verweij, M.D. (TU Delft ImPhys/Medical Imaging; Erasmus MC); de Jong, N. (TU Delft ImPhys/Medical Imaging; Erasmus MC); Pertijs, M.A.P. (TU Delft Electronic Instrumentation)","Fujino, Laura C. (editor)","2022","Intra-cardiac echography (ICE) probes (Fig. 32.2.1) are widely used in electrophysiology for their good procedure guidance and relatively safe application. ASICs are increasingly employed in these miniature probes to enhance signal quality and reduce the number of connections needed in mm-diameter catheters [1]-[5]. 3D visualization in real-time is additionally enabled by 2D transducer arrays with, for each transducer element, a high-voltage (HV) transmit (TX) part, to generate acoustic pulses of sufficient pressure, and a receive (RX) path, to process the resulting echoes. To achieve the required reduction in RX channels, micro-beamforming (BF), which merges the signals from a subarray using a delay-and-sum operation, has been shown to be an effective solution [3], [4]. However, due to the frame-rate reduction that is associated with BF, these designs cannot serve emerging high-frame-rate imaging modes (1000 volumes/s) like 3D blood-flow and elastography imaging. In-probe digitization has recently been investigated to provide further channel-count reduction, make data transmission more robust, and enable pre-processing in the probe [1]-[3]. 
However, these earlier designs have either no TX functionality [2], [3] or only low-voltage (LV) TX [1] integrated. Combining BF and digitization with area-hungry HV transmitters in a pitch-matched scalable fashion while supporting high-frame-rate imaging remains an unmet challenge. The work presented in this paper meets this target, enabled by a hybrid ADC, the small die size of which allows for co-integration with 65V element-level pulsers.","Low voltage; Three-dimensional displays; Transducers; Ultrasonic imaging; Array signal processing; Transmitters; Imaging","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Electronic Instrumentation","","",""
"uuid:c22ae697-556e-4b7a-9fa3-d58daf877fee","http://resolver.tudelft.nl/uuid:c22ae697-556e-4b7a-9fa3-d58daf877fee","Highly Compact Partial Power Converter for a Highly Efficient PV-BESS Stacked Generation System","Granello, P. (Sapienza University of Rome); Schirone, Luigi (Sapienza University of Rome); Bauer, P. (TU Delft DC systems, Energy conversion & Storage); Miceli, Rosario (Università degli Studi di Palermo); Pellitteri, Filippo (Università degli Studi di Palermo)","","2022","The inherently intermittent nature of photovoltaic (PV) energy has brought increasing interest towards the integration between PV sources and Battery Energy Storage Systems (BESS). In this paper, a Series Partial Power Processing (PPP) converter based on Capacitive Power Transfer (CPT) is proposed to integrate PV and BESS in a grid-connected inverter system. The proposed converter has been simulated according to a PV string capable to provide 1430 W under full irradiance conditions, a BESS nominal voltage equal to 215 V and a solar inverter assumed to operate with a minimum voltage of 150 V and a maximum current of 10 A. Simulation tests carried out at different conditions of solar radiation and required load power aim at demonstrating the correct operation of the proposed system.","DC/DC Power Conversion; Partial Power Processing; Switched Capacitor; Capacitive Power Transfer; Battery Energy Storge Systems; Photovoltaic","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","DC systems, Energy conversion & Storage","","",""
"uuid:b61bcab1-d64d-4138-9c01-faefcc729b1c","http://resolver.tudelft.nl/uuid:b61bcab1-d64d-4138-9c01-faefcc729b1c","Efficient Circuits for Permuting and Mapping Packed Values Across Leveled Homomorphic Ciphertexts","Vos, J.V. (TU Delft Cyber Security); Vos, D.A. (TU Delft Cyber Security); Erkin, Z. (TU Delft Cyber Security)","Atluri, Vijayalakshmi (editor); Di Pietro, Roberto (editor); Jensen, Christian D. (editor); Meng, Weizhi (editor)","2022","Cloud services are an essential part of our digital infrastructure as organizations outsource large amounts of data storage and computations. While organizations typically keep sensitive data in encrypted form at rest, they decrypt it when performing computations, leaving the cloud provider free to observe the data. Unfortunately, access to raw data creates privacy risks. To alleviate these risks, researchers have developed secure outsourced data processing techniques. Such techniques enable cloud services that keep sensitive data encrypted, even during computations. For this purpose, fully homomorphic encryption is particularly promising, but operations on ciphertexts are computationally demanding. Therefore, modern fully homomorphic cryptosystems use packing techniques to store and process multiple values within a single ciphertext. However, a problem arises when packed data in one ciphertext does not align with another. For this reason, we propose a method to construct circuits that perform arbitrary permutations and mappings of such packed values. Unlike existing work, our method supports moving values across multiple ciphertexts, considering that the values in real-world scenarios cannot all be packed within a single ciphertext. We compare our open-source implementation against the state-of-the-art method implemented in HElib, which we adjusted to work with multiple ciphertexts. When data is spread among five or more ciphertexts, our method outperforms the existing method by more than an order of magnitude. 
Even when we only consider a permutation within a single ciphertext, our method still outperforms the state-of-the-art works implemented by HElib for circuits of similar depth.","Applied cryptography; Data packing; Fully homomorphic encryption; Secure outsourced data processing","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-04-01","","","Cyber Security","","",""
"uuid:5936cf9e-101d-4708-8a88-c79338fa03b9","http://resolver.tudelft.nl/uuid:5936cf9e-101d-4708-8a88-c79338fa03b9","Fuel cells systems for sustainable ships","van Biert, L. (TU Delft Ship Design, Production and Operations); Visser, K. (TU Delft Ship Design, Production and Operations)","Baldi, Francesco (editor); Coraddu, Andrea (editor); Mondejar, Maria E. (editor)","2022","As shipping is setting sail for a sustainable future, the application of fuel cells is increasingly regarded as a promising technology to reduce or fully eliminate emissions. Fuel cells convert the chemical energy in fuels directly into electricity, achieving high efficiencies while emitting no hazardous compounds and producing little noise and vibrations. This chapter provides an overview of different fuel cell systems and discusses the various fuel cell types, working principles and characteristics. Particular attention is given to the low and high temperature polymer electrolyte membrane fuel cell and solid oxide fuel cell, as these are often considered to hold most potential for application in ships. The application of fuel cells is not restricted to the use of pure hydrogen, thus an overview of relevant fuel processing and purification technologies is provided as well. Operational aspects including electrical efficiency, part load performance, load transients, system start-up, heat recovery and combined cycle operation are introduced and subsequently discussed in the context of maritime application. Aspects related to ship design and operation, emission regulation compliance, reliability, availability, maintenance, safety and economics are briefly considered. Finally, an overview of relevant maritime experience and a future outlook are provided.","Fuel processing; Operation; PEMFC; Ships; SOFC","en","book chapter","Elsevier","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Ship Design, Production and Operations","","",""
"uuid:bb73c05f-fd29-4e11-97a6-5a2b46cce0ff","http://resolver.tudelft.nl/uuid:bb73c05f-fd29-4e11-97a6-5a2b46cce0ff","Nano-modification in digital manufacturing of cementitious composites","França de Mendonça Filho, F. (TU Delft Materials and Environment); Chen, Y. (TU Delft Materials and Environment); Copuroglu, Oguzhan (TU Delft Materials and Environment)","Sahmaran, Mustafa (editor); Shaikh, Faiz (editor); Yildirim, Gürkan (editor)","2022","Remarkable attention from both academia and industry has been attracted to extrusion-based 3D concrete printing (3DCP) during the last decade. Many companies in the Netherlands, e.g., Royal BAM Group, CyBe, Twente Additive Manufacturing, and Bruil, are attempting to implement this technology in practice. 3DCP is the focused digital concrete manufacturing technique in this study. The development of printable cementitious composites is possibly the most critical aspect in 3DCP. Compared to mold-cast concrete process, several essential material parameters need to be controlled in 3DCP process, i.e., pumpability, extrudability, buildability, and others. Conventional materials technology appears to have limited resources to offer for further enhancing the capabilities of 3D printing. Therefore, there is a dire need for adopting non-conventional materials solutions for which nanomaterials can play a vital role. Controlling the rheology is the key to successful 3DCP, as achieving dimensional stability and the minimum required mechanical properties in green state are the main challenges. Furthermore, achieving a required strength development rate and enabling smart monitoring of the 3DCP are the other goals that are desired in designing such materials. Recent research shows that successful modification of cementitious materials can be achieved by incorporating nanomaterials in the materials design for the enhanced fresh and hardened state properties. 
In this chapter, a summary of these developments is compiled in the light of potential applications, safety issues, and technological challenges.","buildability assessment; Construction technologies; digital fabrication; nanomaterials; printing processes; rheology","en","book chapter","Elsevier","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-09-18","","","Materials and Environment","","",""
"uuid:43f90af3-41ea-4e78-a87a-aba42dcdf6d8","http://resolver.tudelft.nl/uuid:43f90af3-41ea-4e78-a87a-aba42dcdf6d8","A systematic approach for the processing of experimental data from anaerobic syngas fermentations","Almeida Benalcazar, E.F. (TU Delft BT/Bioprocess Engineering; University of Campinas); Noorman, H.J. (TU Delft BT/Bioprocess Engineering; DSM); Filho, Rubens Maciel (University of Campinas); Posada Duque, J.A. (TU Delft BT/Biotechnology and Society)","","2022","This study describes a methodological framework designed for the systematic processing of experimental syngas fermentation data for its use by metabolic models at pseudo-steady state and at transient state. The developed approach allows the use of not only own experimental data but also from experiments reported in literature which employ a wide range of gas feed compositions (from pure CO to a mixture between H2 and CO2), different pH values, two different bacterial strains and bioreactor configurations (stirred tanks and bubble columns). The developed data processing framework includes i) the smoothing of time-dependent concentrations data (using moving averages and statistical methods that reduce the relevance of outliers), ii) the reconciliation of net conversion rates such that mass balances are satisfied from a black-box perspective (using minimizations), and iii) the estimation of dissolved concentrations of the syngas components (CO, H2 and CO2) in the fermentation broth (using mass transfer models). 
Special care has been taken such that the framework allows the estimation of missing or unreported net conversion data and metabolite concentrations in the intra- or extracellular spaces (provided that at least two replicate experiments are available) through the use of approximative kinetic equations.","data reconciliation; experimental data processing; fermentation data reconstruction; Syngas fermentation","en","book chapter","Elsevier","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","BT/Bioprocess Engineering","","",""
"uuid:b258f8a5-5484-4afd-a2f4-4984f4492038","http://resolver.tudelft.nl/uuid:b258f8a5-5484-4afd-a2f4-4984f4492038","The Secondary Use Group: Unlocking Waste as a Common Pool of Resources in the 1970s’","Medici, P. (TU Delft Theory, Territories & Transitions)","Bruyns, Gerhard (editor); Kousoulas, Stavros (editor)","2022","Today, the evident need for more efficient conservation, management and redistribution of natural and human-made common resources have inspired thinkers, researchers, and designers to redefine the organization of our societies. For example, Silke Helfrich and David Bollier argue that the common-pool resources (CPR) defined by Elinor Ostrom require new “practices of commoning” that reconsider the conventional discourses of market economy and state intervention. Several contemporary architectural firms have introduced innovative design strategies concerning the collective collection and reuse of local materials, the commons and the circular economy.
However, already after the oil crisis in the early 1970s, practices like the Secondary Use Group (SUG) engaged with the circular reuse of materials but did not connect to discourses concerning the commons. This essay analyzes SUG’s projects during the 1970s using a lens calibrated on the contemporary debate of the commons, to unveil and highlight some relevant aspects of their work. This lens will refer to Michel Bauwens and Tom Avermaete, who differentiate between material commons, that is, human-made and -handled reserves of materials from our environments and cities; immaterial commons, knowledge and craft skills existing in a particular place; and commoning processes, social practices of mutual collaboration. The first goal of this research is to describe the work of SUG concerning its material and immaterial commons. The second goal is to inform the contemporary debate regarding waste and materials as a CPR to be unlocked by architects and users through commoning processes of materials reuse.","Commons; Common pool resources; Commoning process; Materials reuse; Circular economy; Architectural approach; Secondary use group; Martin Pawley","en","book chapter","Springer","","","","","","","2022-11-18","","","Theory, Territories & Transitions","","",""
"uuid:833ceea0-d96a-4886-aefe-4a5bbc59d0df","http://resolver.tudelft.nl/uuid:833ceea0-d96a-4886-aefe-4a5bbc59d0df","Fake It Till You Make It: Data Augmentation Using Generative Adversarial Networks for All the Crypto You Need on Small Devices","Mukhtar, Naila (Macquarie University); Batina, Lejla (Radboud Universiteit Nijmegen); Picek, S. (TU Delft Cyber Security; Radboud Universiteit Nijmegen); Kong, Yinan (Macquarie University)","Galbraith, Steven D. (editor)","2022","Deep learning-based side-channel analysis performance heavily depends on the dataset size and the number of instances in each target class. Both small and imbalanced datasets might lead to unsuccessful side-channel attacks. The attack performance can be improved by generating traces synthetically from the obtained data instances instead of collecting them from the target device, but this is a cumbersome and challenging task. We propose a novel data augmentation approach based on conditional Generative Adversarial Networks (cGAN) and Siamese networks, enhancing the attack capability. We also present a quantitative comparative deep learning-based side-channel analysis between a real raw signal leakage dataset and an artificially augmented leakage dataset. The analysis is performed on the leakage datasets for both symmetric and public-key cryptographic implementations. We investigate non-convergent networks’ effect on the generation of fake leakage signals using two cGAN based deep learning models. The analysis shows that the proposed data augmentation model results in a well-converged network that generates realistic leakage traces, which can be used to mount deep learning-based side-channel analysis successfully even when the dataset available from the device is not optimal. 
Our results show that the datasets enhanced with “faked” leakage traces are breakable (whereas they are not without augmentation), which might change how we perform deep learning-based side-channel analysis.","ASCAD; Data augmentation; Deep learning-based side-channel attacks; Elliptic curve cryptography; GANs; Signal processing","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Cyber Security","","",""
"uuid:438db919-e9a0-41e3-9f4c-a8f6f9dfc08b","http://resolver.tudelft.nl/uuid:438db919-e9a0-41e3-9f4c-a8f6f9dfc08b","Dismantling Digital Cages: Examining Design Practices for Public Algorithmic Systems","Nouws, S.J.J. (TU Delft Information and Communication Technology); Janssen, M.F.W.H.A. (TU Delft Engineering, Systems and Services); Dobbe, R.I.J. (TU Delft Information and Communication Technology)","Janssen, Marijn (editor); Csáki, Csaba (editor); Lindgren, Ida (editor); Melin, Ulf (editor); Loukis, Euripidis (editor); Viale Pereira, Gabriela (editor); Rodríguez Bolívar, Manuel Pedro (editor); Tambouris, Efthimios (editor)","2022","Algorithmic systems used in public administration can create or reinforce digital cages. A digital cage refers to algorithmic systems or information architectures that create their own reality through formalization, frequently resulting in incorrect automated decisions with severe impact on citizens. Although much research has identified how algorithmic artefacts can contribute to digital cages and their unintended consequences, the emergence of digital cages from human actions and institutions is poorly understood. Embracing a broader lens on how technology, human activity, and institutions shape each other, this paper explores what design practices in public organizations can result in the emergence of digital cages. Using Orlikowski’s structurational model of technology, we found four design practices in observations and interviews conducted at a consortium of public organizations. This study shows that design processes of public algorithmic systems (1) are often narrowly focused on technical artefacts, (2) disregard the normative basis for these systems, (3) depend on involved actors’ awareness of socio-technics in public algorithmic systems, (4) and are approached as linear rather than iterative. 
These four practices indicate that institutions and human actions in design processes can contribute to the emergence of digital cages, but also that institutional – as opposed to technical – possibilities to address their unintended consequences are often ignored. Further research is needed to examine how design processes in public organizations can evolve into socio-technical processes and become more democratic, and how power asymmetries in the design process can be mitigated.","Design process; Digital cage; Public algorithmic system; Structuration","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-04-30","","Engineering, Systems and Services","Information and Communication Technology","","",""
"uuid:1f583f1d-6a5e-4818-a047-f7b07fe64d9d","http://resolver.tudelft.nl/uuid:1f583f1d-6a5e-4818-a047-f7b07fe64d9d","Exploring Critical Urbanities: A Knowledge Co-Transfer Approach for Fragmented Cities in Water Landscapes","Janches, F. (TU Delft Environmental Technology and Design; University of Buenos Aires); Diedrich, Lisa (Swedish University of Agricultural Sciences); Sepulveda Carmona, D.A. (TU Delft Spatial Planning and Strategy)","Marinic, Gregory (editor); Meninato, Pablo (editor)","2022","The urban conditions of many metropolitan regions in the Global South are marked by growing informal settlements, growing inequalities, and socio-spatial fragmentation. They face alterations of their natural-spatial context imposed by climate change and new hydrological patterns. Knowledge is needed to direct their transformation toward more sustainable futures. Academia plays an important role in this knowledge production process that bridges disciplines and geographies. It ensures links to professional actors, public authorities, and civil society in their respective localities. This chapter introduces the adaptation of a more collaborative, trans-disciplinary, and multi-directional working method called “Beyond Best Practice” that raises research questions around ever-evolving, multi-actor collaborations from a design thinking perspective. These research experiences allowed us to promote an open-ended, co-transfer thematic, and methodological knowledge process by developing and testing ideas in real-world laboratory situations. Its results can be redirected to the Global North, where patterns of informality increasingly characterize hotspots of critical urbanity and, in turn, would benefit from knowledge sourced in the Global South.","Informal urbanism; Trans-disciplinary; Collaborative design process; Transferring knowledge; Site specific","en","book chapter","Springer","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Spatial Planning and Strategy","","",""
"uuid:9a41ec63-7636-4e0b-b033-f52d971afda8","http://resolver.tudelft.nl/uuid:9a41ec63-7636-4e0b-b033-f52d971afda8","Extended, Distributed, and Predictive: Sketches of a Generative Theory of Interaction for HCI","Browne, J.T. (TU Delft Methodologie en Organisatie van Design; Philips Research); Garnham, Ignacio (Aarhus University)","Stephanidis, Constantine (editor); Antona, Margherita (editor); Ntoa, Stavroula (editor); Salvendy, Gavriel (editor)","2022","This paper blends work in extended mind, distributed cognition, and predictive processing to provide a novel generative theory of interaction. This dovetailing offers an emerging picture of cognition that HCI stands to benefit from: our cognition is extended, distributed, and constantly trying to predict incoming sensory stimuli across social, cultural, and temporal scales. We develop a sketch of a generative theory of interaction for HCI and offer some directions for future work.","Artificial Intelligence; Distributed cognition; Extended mind; Generative theory of interaction; HCI; Predictive processing","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Methodologie en Organisatie van Design","","",""
"uuid:9244f01e-ca70-4595-83c8-7e6cfd6800d3","http://resolver.tudelft.nl/uuid:9244f01e-ca70-4595-83c8-7e6cfd6800d3","Gaussian Process Latent Force Models for Virtual Sensing in a Monopile-Based Offshore Wind Turbine","Zou, J. (TU Delft Offshore Engineering); Cicirello, A. (TU Delft Mechanics and Physics of Structures); Iliopoulos, Alexandros (Siemens); Lourens, E. (TU Delft Dynamics of Structures; TU Delft Offshore Engineering)","Rizzo, Piervincenzo (editor); Milazzo, Alberto (editor)","2022","Fatigue assessment in offshore wind turbine support structures requires the monitoring of strains below the mudline, where the highest bending moments occur. However, direct measurement of these strains is generally impractical. This paper presents the validation of a virtual sensing technique based on the Gaussian process latent force model for dynamic strain monitoring. The dataset, taken from an operating near-shore turbine in the Westermeerwind Park in the Netherlands, provides a unique opportunity for validation of strain estimates at locations below the mudline using strain gauges embedded within the monopile foundation.","Bayesian inference; Gaussian process; Offshore wind turbines; Virtual sensing","en","conference paper","Springer","","","","","","","2022-06-19","","","Offshore Engineering","","",""
"uuid:510e4f10-34c4-466a-bb1f-c161a52d87c6","http://resolver.tudelft.nl/uuid:510e4f10-34c4-466a-bb1f-c161a52d87c6","Markov Modulated Process to Model Human Mobility","Chang, Brian (Student TU Delft); Yang, Liufei (Student TU Delft); Sensi, M. (TU Delft Network Architectures and Services); Achterberg, M.A. (TU Delft Network Architectures and Services); Wang, F. (TU Delft Network Architectures and Services); Rinaldi, M. (TU Delft Transport and Planning); Van Mieghem, P.F.A. (TU Delft Network Architectures and Services)","Benito, Rosa Maria (editor); Cherifi, Chantal (editor); Cherifi, Hocine (editor); Moro, Esteban (editor); Rocha, Luis M. (editor); Sales-Pardo, Marta (editor)","2022","We introduce a Markov Modulated Process (MMP) to describe human mobility. We represent the mobility process as a time-varying graph, where a link specifies a connection between two nodes (humans) at any discrete time step. Each state of the Markov chain encodes a certain modification to the original graph. We show that our MMP model successfully captures the main features of a random mobility simulator, in which nodes moves in a square region. We apply our MMP model to human mobility, measured in a library.","Human mobility; Markov chains; Markov modulated process; Modeling; Time-varying networks","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-02-01","","","Network Architectures and Services","","",""
"uuid:61ebeb56-a107-436f-93d0-fb3eb2d54418","http://resolver.tudelft.nl/uuid:61ebeb56-a107-436f-93d0-fb3eb2d54418","Red Light/Green Light: A Lightweight Algorithm for, Possibly, Fraudulent Online Behavior Change Detection","Herrera Semenets, V. (Advanced Technologies Application Center); Hernández-León, Raudel (Advanced Technologies Application Center); Bustio-Martínez, Lázaro (Universidad Iberoamericana); van den Berg, Jan (TU Delft Cyber Security)","Pichardo Laguna, Obdulia (editor); Martínez-Miranda, Juan (editor); Martínez Seis, Bella (editor)","2022","Telecommunications services have become a constant in people’s lives. This has inspired fraudsters to carry out malicious activities causing economic losses to people and companies. Early detection of signs that suggest the possible occurrence of malicious activity would allow analysts to act in time and avoid unintended consequences. Modeling the behavior of users could identify when a significant change takes place. Following this idea, an algorithm for online behavior change detection in telecommunication services is proposed in this paper. The experimental results show that the new algorithm can identify behavioral changes related to unforeseen events.","Online data processing; Behavior changes; Anomaly detection; Concept drift; Cybersecurity; Multimodal data analysis","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2023-07-01","","","Cyber Security","","",""
"uuid:7f26d675-4e27-4ddb-bc84-1daa0f3d2eb2","http://resolver.tudelft.nl/uuid:7f26d675-4e27-4ddb-bc84-1daa0f3d2eb2","Towards unmanned cargo ships: A task based design process to identify economically viable low and unmanned ship concepts","Kooij, C. (TU Delft Ship Design, Production and Operations)","Hekkenberg, R.G. (promotor); Kana, A.A. (promotor); Delft University of Technology (degree granting institution)","2021","Unmanned and low-manned transport has increasingly been studied this past decade. While there have been successful trials for autonomous navigation, unmanned cargo ships are not commercially available yet. First, this dissertation investigates how changes to a ship’s systems and organizational structure can affect the crew’s size and composition. Then, a cost benefit analysis determines the economic viability of these concepts. This research concludes with feasible intermediate steps between a conventional ship and a fully unmanned ship.","Unmanned ships; low-manned ships; design process; autonomous ships; greedy algorithm","en","doctoral thesis","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:782fcd3e-0db4-42ee-965a-c180586759f4","http://resolver.tudelft.nl/uuid:782fcd3e-0db4-42ee-965a-c180586759f4","Preventing major hazard accidents through barrier performance monitoring","Schmitz, P.J.H. (TU Delft Safety and Security Science)","Reniers, G.L.L.M.E. (promotor); Swuste, P.H.J.J. (copromotor); Delft University of Technology (degree granting institution)","2021","Foreseeing or even predicting major accidents is understandably challenging, both for any practitioner involved as for safety scientists and other academics. Understanding these events and trying to prevent them is a primary goal of a safety theory. Major hazard-related accidents rarely occur but when they do, they can cause many casualties and injured, and have major financial consequences due to production loss, material damage to the installation and/or environmental damage. Ultimately, major hazard-related accidents may ruin the company involved. Process safety is becoming more and more important in the process industry and is strongly linked to reliability, quality, productivity, security of supply, and good business....","process safety; indicator; ammonia; barrier; bowtie","en","doctoral thesis","","978-94-6419-348-0","","","","","","","","","Safety and Security Science","","",""
"uuid:2ba49af1-fa17-476c-88f4-97783ca4e39a","http://resolver.tudelft.nl/uuid:2ba49af1-fa17-476c-88f4-97783ca4e39a","Approximately Optimal Resource Management for Multi-Function Radar: Algorithmic Solutions Using a Generic Framework","Schöpe, M.I. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Driessen, J.N. (copromotor); Delft University of Technology (degree granting institution)","2021","Recent advances in Multi-Function Radar (MFR) systems led to an increase in their degrees of freedom. As a result, modern MFR systems are capable of adjusting many parameters during runtime. An automatic adaptation of the radar system to changing situations, like weather conditions, interference, or target maneuvers, is often mentioned in the context of MFR and is usually called Radar Resource Management (RRM). This thesis aims at developing a generic framework and approximately optimal algorithmic solutions for solving RRM problems. This is achieved by formulating the sensor tasks as Partially Observable Markov Decision Processes (POMDPs). Although the focus is on MFR, the approach is not limited to such sensor systems and has broader applicability.
In Chapter 2, a first step is taken by investigating Lagrangian Relaxation (LR) and the subgradient method for optimally distributing the sensor resources to the different tasks in a multi-target tracking scenario. A constrained optimization problem is formulated. Using LR, the constraints can be included in the cost function. In a time-invariant scenario, it is shown that the proposed Optimal Steady-State Budget Balancing (OSB) algorithm will lead to balanced budgets based on track parameters like maneuverability and measurement uncertainty. The time-invariant scenario is a special case of general tracking scenarios, and the presented solution can be seen as the optimal POMDP solution in that case. Since real-world applications quickly lead to time-varying scenarios, it is demonstrated how the approach can be extended to such cases. Finally, the proposed method is compared with other budget assignment strategies.
Subsequently, the tracking tasks are explicitly formulated as POMDPs, and the novel Approximately Optimal Dynamic Budget Balancing (AODB) algorithm is proposed in Chapter 3. The algorithm applies a combination of LR and Policy Rollout (PR). PR is a Monte Carlo sampling method for POMDPs to find the expected future cost. Due to its generic architecture, the framework can be applied to different radar or sensor systems and cost functions. In a time-invariant scenario, the algorithm calculates a solution close to the optimal steady-state solution, as presented in Chapter 2. This is shown through simulations of a two-dimensional tracking scenario. Moreover, it is demonstrated how the algorithm dynamically allocates the sensor time budgets to the tasks in a changing environment in a non-myopic fashion. Finally, the algorithm's performance is compared with different resource allocation techniques.
Based on the previous results, Chapter 4 conducts a detailed investigation of the computational load of the AODB algorithm. It is shown how the choice of several input parameters influences computational performance. Additionally, Model Predictive Control (MPC) is applied in the same framework as an alternative POMDP solution method. Compared to stochastic optimization methods such as PR, the computational load is dramatically reduced while the resource allocation results are similar. This is shown through simulations of dynamic multi-target tracking scenarios in which the cost and computational load of different approaches are compared.
So far, this thesis has used tracking scenarios to demonstrate the validity of the proposed algorithms. Chapter 5 shows how to apply the proposed framework and algorithmic solution to a multi-target joint tracking and classification scenario. It is shown that tracking and classification can be considered in a single task type. Furthermore, it is shown how the task resource allocations can be jointly optimized using a single carefully formulated cost function based on the task threat variance. Multiple two-dimensional radar scenarios demonstrate how sensor resources are allocated depending on the current knowledge of the target position and class.
Chapter 6 extends the single-sensor approach shown in the previous chapters to multiple sensors and demonstrates the usefulness of the proposed algorithm in two different multi-sensor multi-target tracking scenarios. The first scenario considers a generic surveillance situation. An approximately optimal approach based on the previously proposed algorithm is formulated assuming a central processor. Subsequently, a distributed implementation is introduced that converges to the same results as the centralized implementation and requires fewer computational resources. The performance of the proposed approach for both centralized and distributed implementation is demonstrated through dynamic tracking scenarios. The second scenario focuses explicitly on an automotive application. The proposed generic framework and algorithmic solution are used to allocate scarce resources across multiple mobile sensor nodes. A central system manages the nodes' transmission and shares sensing data with other sensor nodes if this improves the overall track accuracy. The proposed method allocates time and frequency resources. Through simulation of a typical traffic situation, the validity of the approach is demonstrated.
This thesis shows that the application of the proposed novel generic framework and algorithmic solution increases the performance w.r.t. heuristic solutions. Furthermore, it is demonstrated that the proposed framework allows the user to exchange elements such as cost function or POMDP solution method to adjust it to specific needs. The proposed method can be applied in many different areas involving different types of sensors. Possible applications include automotive scenarios, such as autonomous driving or traffic monitoring, (maritime) surveillance, and air traffic control.","Radar Resource Management; Lagrangian Relaxation; Partially Observable Markov Decision Process; Policy Rollout","en","doctoral thesis","","978-94-6384-263-1","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:20dd1357-2c56-446d-9635-d60edf2c0bd1","http://resolver.tudelft.nl/uuid:20dd1357-2c56-446d-9635-d60edf2c0bd1","Quantification of flyby effects in the three-body problem using the Gaussian process method","Liu, Y. (TU Delft Astrodynamics & Space Missions)","Visser, P.N.A.M. (promotor); Noomen, R. (copromotor); Delft University of Technology (degree granting institution)","2021","The gravity assist (GA) plays an important role in space missions since itwas first applied by the Luna 3 vehicle in 1959. For preliminary trajectory design, the so-called patchedconics model provides a simple model for a gravity assist. This approach, based on twobody formulations, splits amulti-body probleminto a succession of two-body problems. This model has a fundamental assumption: the trajectory of the spacecraft is driven by one celestial body only. A boundary for switching the driving bodies is defined by the Sphere of Influence (SoI) of the GA body. The patched conics model cannot be used to study low-energy trajectories. Moreover, it fails to describe special dynamics existing in the multi-body regime, such as the invariant manifolds. The three-body formulation is a logical choice to study the dynamics in the multi-body problem. In order to reduce its inherent difficulty, the circular restricted three-body problem (CR3BP) formulation is developed to study the behavior of the motion of a particle influenced by two massive bodies simultaneously. Flybys in the CR3BP have been studied by many researchers, using a numerical or semi-analytical approach, e.g. the Flyby map (FM) and Keplerianmap (KM), respectively. 
Inspired by these approaches and the idea of artificial intelligence, this thesis focuses on the investigation of flybys from a machine-learning perspective.","Gravity Assists; Circular Restricted Three-Body Problem; Gaussian Process Method; Gravity Assist Mapping; Jacobi Constant","en","doctoral thesis","","","","","","","","2022-08-31","","","Astrodynamics & Space Missions","","",""
"uuid:0a0344dc-b98b-4539-8456-2c6de4843315","http://resolver.tudelft.nl/uuid:0a0344dc-b98b-4539-8456-2c6de4843315","The Symmetric Exclusion Process and the Gausian Free Field on compact Riemannian manifolds","van Ginkel, G.J. (TU Delft Applied Probability)","Redig, F.H.J. (promotor); van Neerven, J.M.A.M. (promotor); Cipriani, A. (copromotor); Delft University of Technology (degree granting institution)","2021","In this thesis we study the Symmetric Exclusion Process (SEP) and the Discrete Gaussian Free Field (DGFF) on compact Riemannian manifolds. In particular, we obtain the hydrodynamic limit and the equilibrium fluctuations of SEP and we show that the DGFF converges to its continuous counterpart. To define these discrete models, we construct grids with edge weights that approximate the underlying manifold in a suitable way. Additionally, we study a model of an active particle and the role of reversibility for its limiting diffusion coeffcient and large deviations rate function.","Interacting particle systems; Hydrodynamic limit; Equilibrium fluctuations; (Discrete) Gaussian Free Field; Scaling limit; Active particle; Riemannian manifold; Stochastic processes","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:f5a82a4c-f472-4e41-b046-1e9bdb9de135","http://resolver.tudelft.nl/uuid:f5a82a4c-f472-4e41-b046-1e9bdb9de135","Predicting major hazard accidents by monitoring their barrier systems: A validation in retrospective","Schmitz, P.J.H. (TU Delft Safety and Security Science; OCI-Nitrogen); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); Swuste, P.H.J.J. (TU Delft Safety and Security Science)","","2021","OCI Nitrogen, one of Europe's largest fertilizer producers, is investigating the extent to which it is possible to take targeted measures at an early stage and stop the development of major hazard accident processes. An innovative model has been developed and recently explained and elaborated in a number of publications. This current paper contains a validation of the model by looking at the BP Texas City incident in 2005. The bowtie metaphor is used to visually present the BP Texas City refinery incident, showing the barrier system from different perspectives. Not only is the barrier system looked at from its trustworthiness on the day of the incident but also from the perspective of the control room operator, and from a design to current standards of best practice. The risk reductions of these different views are calculated and compared to their original design. In addition, evidence and findings from the investigations have been categorized as flaws and allocated to nine organizational factors. These flaws may affect the barrier system's quality or trustworthiness, or may act as ‘accident pathogens’ (see also Reason, 1990) creating latent, dangerous conditions. 
This paper sheds new light on the monitoring of accident processes and the barrier management to control them, and demonstrates that the BP Texas City refinery incident could have been foreseen using preventive barrier indicators and monitoring organizational factors.","Bowtie; Indicator; Management delivery system; Organizational factor; Process safety","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:4814315c-3230-497f-970f-2b9e760358ca","http://resolver.tudelft.nl/uuid:4814315c-3230-497f-970f-2b9e760358ca","Accelerated degradation tests with inspection effects","Zhao, Xiujie (Tianjin University); Chen, P. (TU Delft Statistics); Gaudoin, Olivier (Université Grenoble Alpes); Doyen, Laurent (Université Grenoble Alpes)","","2021","This study proposes a framework to analyze accelerated degradation testing (ADT) data in the presence of inspection effects. Motivated by a real dataset from the electric industry, we study two types of effects induced by inspections. After each inspection, the system degradation level instantaneously reduces by a random value. Meanwhile, the degrading rate is elevated afterwards. Considering the absence of observations due to practical reasons, we employ the expectation–maximization (EM) algorithm to analytically estimate the unknown parameters in a stepwise Wiener degradation process with covariates. Moreover, to maintain the level of generality for the adaption of the method in various scenarios, a confidence density approach is utilized to hierarchically estimate the parameters in the acceleration link function. The proposed methods can provide efficient parameter estimation under complex link functions with multiple stress factors. Further, confidence intervals are derived based on the large-sample approximation. Simulation studies and a case study from Schneider Electric are used to illustrate the proposed methods. The results show that the proposed model yields a remarkably better fit to the Schneider data in comparison to the conventional Wiener ADT model.","Accelerated degradation tests; Confidence density; Degradation modeling; Reliability; Wiener process","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:ea2a42a8-6f2b-4014-90e0-e5c742593307","http://resolver.tudelft.nl/uuid:ea2a42a8-6f2b-4014-90e0-e5c742593307","A biophysical model for seedling establishment in mangrove forests","Gijsman, R. (University of Twente, Netherlands); Horstman, E.M. (University of Twente, Netherlands); Willemsen, P.W.J.M. (University of Twente, Netherlands - Deltares, Netherlands); Swales, A. (National Institute of Water and Atmospheric Research, New Zealand); Wijnberg, K.M. (University of Twente, Netherlands)","","2021","Mangrove seedling establishment is crucial to the long-term development of mangrove forests. This study incorporates a process-based approach for seedling establishment in a process-based hydrodynamic model. The biophysical model is used to simulate seedling establishment in the Firth of Thames estuary (New Zealand). The results are compared to a random seedling establishment approach that has been often-used in long-term mangrove forest development models. While small differences were observed in terms of the seaward extent of seedling establishment, larger differences were found for the patchiness and density of the establishing seedlings. The results of the process-based approach showed a more localized pattern of seedling establishment, in line with field observations in the Firth of Thames. This pattern was opposed to the more spatially uniform establishment patterns predicted with the random establishment approach. These differences reveal that the implemented seedling establishment approach may affect long-term mangrove forest development models. Moreover, the process-based approach is more easily setup and calibrated with physical parameters that can be measured in the field.","Estuarine processes, fine sediments and vegeta; Nature-based solutions","en","conference paper","","","","","","","","","","","","","",""
"uuid:f2ddb159-137d-4f51-9a4c-a041533d2258","http://resolver.tudelft.nl/uuid:f2ddb159-137d-4f51-9a4c-a041533d2258","An investigation on salt marsh resilience to sea-level rise and increased storm intensity","Pannozzo, N. (University of Liverpool, UK); Leonardi, N. (University of Liverpool, UK); Carnacina, I. (Liverpool John Moores University, UK); Smedley, R. (University of Liverpool, UK)","","2021","Salt marshes are ecosystems with significant economic and environmental value. They provide numerous services, including nutrient removal, habitat provision and carbon sequestration (Barbier et al., 2011). They are also widely recognised as nature-based solutions for coastal defence due to their ability to buffer storm waves (Leonardi et al., 2018). However, it is still unclear how the combined impact of future sea-level rise and possible increases in storm intensity will affect salt marsh resilience (Schuerch et al. 2013). It has been observed that salt marshes can survive sea-level rise if sediment supply and organogenic production are high enough to allow marsh accretion (Kirwan et al., 2010, 2016). However, increasing rates of sea-level rise can lead to marsh drowning by increasing the accommodation space and the amount of sediment inputs required for marsh stability (Kirwan et al., 2010; Ganju et al., 2017). Marsh degradation can also be caused by lateral erosion triggered by wind waves, such as the ones generated during storms (Leonardi et al., 2016; Li et al., 2019). However, several studies have showed that, on the other hand, overwash by storm surges can support marsh resilience by delivering significant amount of sediment to marsh platforms (Walters and Kirwan, 2016; Castagno et al., 2018). This study investigates marsh resilience under the combined impact of various storm surge and sea-level scenarios by using a sediment budget approach. 
The current paradigm is that a positive sediment budget supports the survival and accretion of salt marshes, while a negative sediment budget causes marsh degradation (Ganju et al. 2015). The Ribble Estuary, North-West England, was used as a case study.","Estuarine processes, fine sediments and vegetation; Coasts and climate change","en","conference paper","","","","","","","","","","","","","",""
"uuid:1e3c53db-f9e2-4881-9b6f-0d283ddcac29","http://resolver.tudelft.nl/uuid:1e3c53db-f9e2-4881-9b6f-0d283ddcac29","Analysis of nonlinear ship-induced 3d wave fields using nonlinear fourier transforms","Zhang, H. (Delft University of Technology, Netherlands); Wahls, S. (Delft University of Technology, Netherlands); Brühl, M. (Delft University of Technology, Netherlands)","","2021","In the past decade, observations in the German estuaries such as the rivers Elbe and Weser show increasingly serious damage to bank protection structures (groins and revetments). This damage is caused mainly by waves induced by the passing of big container ships in the shallow and narrow maritime waterways. These ship-induced 3D wave fields consist of long-periodic primary and short-periodic secondary wave components. Due to missing design approaches for the load of long-period waves on rubble-mound revetments, the current risk assessment for protective structures in maritime waterways is based on short-period, wind-induced waves. Therefore, the structures do not ensure sufficient stability against the long-period ship-induced wave loads within the estuaries. Within the research project “Parameterization of nonlinear ship-induced 3D wave fields for the hydraulic design of protective structures in maritime waterways (PaNSiWa)”, we apply nonlinear Fourier transforms (NFTs) on experimentally generated ship waves in maritime waterways. The objective of the project is to provide better understanding of the underlying nonlinear structure of the long-period primary waves and to separate the nonlinear spectral basic components within the ship-wave data from their nonlinear wave-wave interactions. 
In this paper, we present first analyses of the decomposition of ship-wave measurements from experimental tests and the identification of hidden solitons within the long-period primary ship wave.","Estuarine processes, fine sediments and vegetation; Coastal hydrodynamics (waves, tides and surges)","en","conference paper","","","","","","","","","","","","","",""
"uuid:09ce864a-18d8-496e-8ff9-e0144e26bba5","http://resolver.tudelft.nl/uuid:09ce864a-18d8-496e-8ff9-e0144e26bba5","Graph filter designs and implementations","Liu, J. (TU Delft Signal Processing Systems)","Leus, G.J.T. (promotor); Delft University of Technology (degree granting institution)","2021","The ability to model irregular data and the interactions between them have
extended the traditional signal processing tools to the graph domain. Under
these circumstances, the emergence of graph signal processing has offered a
brand new framework for dealing with complex data. In particular, the graph
Fourier transform (GFT) lets us analyze the spectral components of a graph signal in the graph frequency domain. Based on the GFT, graph filters provide useful tools to modify or extract spectral parts in terms of different objectives, e.g., using a low-pass graph filter to construct graph signals without noise. This thesis mainly focuses on designing and implementing graph filters. Similar to traditional signal processing, we investigate two types of graph filters: finite impulse response (FIR) and infinite impulse response (IIR) graph filters. Moreover, this thesis takes both undirected and directed graphs into account for the design methods and implementations.
Graph signal processing provides models and tools to understand and process data coming from such complex systems. With a principled view, rooted in its signal processing background, it establishes the basis for addressing problems involving data defined over interconnected systems by combining knowledge from graph and network theory with signal processing tools. In this thesis, our goal is to advance the current state-of-the-art by studying the processing of network data using graph filters, the workhorse of graph signal processing, and by proposing methods for identifying the topology (interactions) of a network from network measurements.
To extend the capabilities of current graph filters, the network-domain counterparts of time-domain filters, we introduce a generalization of graph filters. This new family of filters does not only provide more flexibility in terms of processing networked data distributively but also reduces the communications in typical network applications, such as distributed consensus or beamforming. Furthermore, we theoretically characterize these generalized graph filters and also propose a practical and numerically-amenable cascaded implementation.
As all methods in graph signal processing make use of the structure of the network, we need to know the topology. Therefore, identifying the network interconnections from networked data is much needed for appropriately processing this data. In this thesis, we pose the network topology identification problem through the lens of system identification and study the effect of collecting information only from part of the elements of the network. We show that by using the state-space formalism, algebraic methods can be applied to the network identification problem successfully. Further, we demonstrate that for the partially-observable case, although ambiguities arise, we can still retrieve a coherent network topology leveraging state-of-the-art optimization techniques.","distributed processing; graph filtering; graph theory; graph signal processing; topology identification","en","doctoral thesis","","978-94-6416-560-9","","","","","","","","","Signal Processing Systems","","",""
"uuid:4dd0034d-587e-4b9b-9b97-0a24210af123","http://resolver.tudelft.nl/uuid:4dd0034d-587e-4b9b-9b97-0a24210af123","Ultrasonic welding of epoxy- to thermoplastic-based composites","Tsiangou, E. (TU Delft Aerospace Structures & Computational Mechanics)","Villegas, I.F. (promotor); Benedictus, R. (promotor); Teixeira De Freitas, S. (copromotor); Delft University of Technology (degree granting institution)","2021","Welding is a promising alternative to mechanical fastening, as currently used, to join dissimilar (i.e., thermoset- to thermoplastic-based) composite parts in modern aircraft. Thermoset composites can be indirectly welded through a thermoplastic coupling layer co-cured on the surface of the laminate that needs to be welded. One of the main challenges when welding thermoset to thermoplastic composites, is the high welding temperatures that are needed to melt the thermoplastic matrix, especially when high-performance thermoplastic polymers are used such as in aerospace applications. The most efficient way to overcome this challenge is by ensuring very fast and localized heating in order to prevent thermal degradation mechanisms from occurring. Out of the currently most developed welding methods, ultrasonic welding can offer exceptionally short heating times of even less than 500 ms, which makes it an excellent candidate for joining thermoset and thermoplastic composites. However, further understanding of the process as applied to dissimilar composite joints is still lacking in order for it to be utilized in actual applications. The aim of this PhD thesis was to further the knowledge on ultrasonic welding of thermoset to thermoplastic composites by firstly identifying suitable practices for successfully welding the dissimilar composites and secondly assessing the robustness of the ultrasonic welding process with respect to changes in process parameters. 
The comparable strength of the welded, dissimilar composite joints to both co-cured, dissimilar composite joints and to welded, thermoplastic composite joints, demonstrated that ultrasonic welding is a very promising joining technique. Moreover, this process was proven to be robust (with respect to the variations in the heating time), since despite the sensitivity of the thermoset composite adherend to the high welding temperatures, a relatively wide processing interval, i.e., range of heating times that result in a certain mechanical performance, could be obtained. Additionally, the weld strength presented a certain degree of insensitivity to changes in the process parameters, i.e., welding force and amplitude of vibrations.","CFRP; thermoplastic composites; thermoset composites; ultrasonic welding; process parameters; energy director","en","doctoral thesis","","978-94-6421-307-2","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:40af58ca-9f3b-491f-8f21-998b45bfecb8","http://resolver.tudelft.nl/uuid:40af58ca-9f3b-491f-8f21-998b45bfecb8","Numerical Modelling for Underwater Excavation Process: A Method Based on DEM and FVM","Chen, X. (TU Delft Offshore and Dredging Engineering)","van Rhee, C. (promotor); Miedema, S.A. (promotor); Delft University of Technology (degree granting institution)","2021","A 3D dynamic numerical model is established for modelling the excavation process for dredging purposes. The interaction between the solid and fluid phases is realized by a specially designed DEM-FVM coupling mechanism, where the fluid-particle interaction forces, the volume fraction information and the particle information are constantly updated and exchanged. Dry and underwater sand cutting simulations are conducted and validated against experimental results. Simulation results of cutting of cohesive soil in atmospheric conditions match the experimental data within an acceptable error margin, while the underwater cutting simulations of cohesive soil have not been validated due to the lack of experimental data. In addition, the general applicability of using Discrete Element Modelling (DEM) to create rock samples, and the calibration of DEM rock samples, have been investigated; both are essential for conducting atmospheric and underwater rock cutting simulations in the future.","Discrete element modelling; Excavation Process; DEM-FVM Coupling","en","doctoral thesis","","978-94-6384-204-4","","","","","","2022-04-01","","","Offshore and Dredging Engineering","","",""
"uuid:5437884e-0078-4b36-b2c7-c6edfea3b418","http://resolver.tudelft.nl/uuid:5437884e-0078-4b36-b2c7-c6edfea3b418","The Intersection of Planning and Learning","Moerland, T.M. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); Plaat, Aske (promotor); Broekens, D.J. (copromotor); Delft University of Technology (degree granting institution)","2021","Intelligent sequential decision making is a key challenge in artificial intelligence. The problem, commonly formalized as a Markov Decision Process, is studied in two different research communities: planning and reinforcement learning. Departing from a fundamentally different assumption about the type of access to the environment, both research fields have developed their own solution approaches and conventions. The combination of both fields, known as model-based reinforcement learning, has recently shown state-of-the-art results, for example defeating human experts in classic board games like Chess and Go. Nevertheless, literature lacks an integrated view on 1) the similarities between planning and learning, and 2) the possible combinations of both. This dissertation aims to fill this gap. The first half of the book presents a conceptual answer to both questions. We first present a framework that disentangles the common algorithmic space of both fields, showing that they essentially face the same algorithmic design decisions. Moreover, we also present an overview of the different ways in which planning and learning can be combined in one algorithm. The second half of the dissertation provides experimental illustration of these ideas. We present several new combinations of planning and learning, such as a flexible method to learn stochastic dynamics models with neural networks, an extension of a successful planning-learning algorithm (AlphaZero) to deal with continuous action spaces, and a study of the empirical trade-off between planning and learning. 
Finally, we also illustrate the commonalities between both fields, by designing a new algorithm in one field based on inspiration from the other field. We conclude the thesis with an outlook for the planning-learning field as a whole. Altogether, the dissertation provides a broad theoretical and empirical view on the combination of planning and learning, which promises to be an important frontier in artificial intelligence research in the coming years.
practice and consists of two parts, respectively. Part one comprises a transversal genealogy of signal processing, questioning how associated technologies are appropriated and employed by various social, cultural, and artistic movements in the production of subjectivity and provides a conceptual framework for the design-driven part. Part two focuses on design and composition in reciprocal connection with theory. Through a series of projects it aims to develop a
deterritorialised architectural machine, an operational diagram, which is meant to enable processes of reterritorialisation by modifying existing sites sonically. This paper highlights two conceptual components of this machine and discusses the theoretical framework and previous projects from which they derive.","reterritorialisation; sonic space; signal processing; machinic subservience","en","conference paper","HafenCity University","","","","","","","","","","Theory, Territories & Transitions","","",""
"uuid:8884ff41-0318-4d35-a2d1-56b194c12d69","http://resolver.tudelft.nl/uuid:8884ff41-0318-4d35-a2d1-56b194c12d69","Estimation of Spectral Notches from Pinna Meshes: Insights from a Simple Computational Model","Spagnol, S. (TU Delft Design Aesthetics); Miccini, Riccardo (Aalborg University); Onofrei, Marius George (Aalborg University); Unnthorsson, Runar (University of Iceland); Serafin, Stefania (Aalborg University)","","2021","While previous research on spatial sound perception investigated the physical mechanisms producing the most relevant elevation cues, how spectral notches are generated and related to the individual morphology of the human pinna is still a topic of debate. Correctly modeling these important elevation cues, and in particular the lowest frequency notches, is an essential step for individualizing Head-Related Transfer Functions (HRTFs). In this paper we propose a simple computational model able to predict the center frequencies of pinna notches from ear meshes. We apply such a model to a highly controlled HRTF dataset built with the specific purpose of understanding the contribution of the pinna to the HRTF. Results show that the computational model is able to approximate the lowest frequency notch with improved accuracy with respect to other state-of-the-art methods. By contrast, the model fails to predict higher-order pinna notches correctly. The proposed approximation supplements understanding of the morphology involved in generating spectral notches in experimental HRTFs.","Acoustic measurements; audio signal processing; Computational modeling; head-related transfer functions (HRTFs); HRTF individualization; Location awareness; pinna; Predictive models; Solid modeling; Spatial audio; spatial hearing; Speech processing; Three-dimensional displays","en","journal article","","","","","","","","","","","Design Aesthetics","","",""
"uuid:641b229c-b904-4e3d-9d3f-8976c2787dcb","http://resolver.tudelft.nl/uuid:641b229c-b904-4e3d-9d3f-8976c2787dcb","Matrix-Pencil Approach-Based Interference Mitigation for FMCW Radar Systems","Wang, J. (TU Delft Microwave Sensing, Signals & Systems); Ding, M. (TU Delft Electrical Engineering, Mathematics and Computer Science); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2021","A novel matrix-pencil (MP)-based interference mitigation approach for frequency-modulated continuous-wave (FMCW) radars is proposed in this article. The interference-contaminated segment of the beat signal is first cut out, and then, the signal samples in the cutout region are reconstructed by modeling the beat signal as a sum of complex exponentials and using the MP method to estimate their parameters. The efficiency of the proposed approach for the interference with different parameters (i.e., interference duration, signal-to-noise ratio (SNR), and different target scenarios) is investigated by means of numerical simulations. The proposed interference mitigation approach is intensively verified on experimental data. Comparisons of the proposed approach with the zeroing and other beat-frequency interpolation techniques are presented. 
The results indicate the broad applicability and superiority of the proposed approach, especially in low SNR and long interference duration situations.","Chirp; Extrapolation; Frequency-modulated continuous-wave (FMCW) radar; Interference; interference mitigation; matrix pencil; Radar; Radar antennas; Radar signal processing; signal fusion; Signal to noise ratio","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-03-01","Electrical Engineering, Mathematics and Computer Science","","Microwave Sensing, Signals & Systems","","",""
"uuid:ed207da0-7661-43de-9be0-5a5ae0fe6d70","http://resolver.tudelft.nl/uuid:ed207da0-7661-43de-9be0-5a5ae0fe6d70","Analysis of nonlinear ship-induced 3d wave fields using nonlinear fourier transforms","Zhang, H. (TU Delft Team Sander Wahls); Wahls, S. (TU Delft Team Sander Wahls); Brühl, M. (TU Delft Team Sander Wahls)","","2021","In the past decade, observations in the German estuaries such as the rivers Elbe and Weser show increasingly serious damage to bank protection structures (groins and revetments). This damage is caused mainly by waves induced by the passing of big container ships in the shallow and narrow maritime waterways. These ship-induced 3D wave fields consist of long-periodic primary and short-periodic secondary wave components. Due to missing design approaches for the load of long-period waves on rubble-mound revetments, the current risk assessment for protective structures in maritime waterways is based on short-period, wind-induced waves. Therefore, the structures do not ensure sufficient stability against the long-period ship-induced wave loads within the estuaries.
Within the research project “Parameterization of nonlinear ship-induced 3D wave fields for the hydraulic design of protective structures in maritime waterways (PaNSiWa)”, we apply nonlinear Fourier transforms (NFTs) on experimentally generated ship waves in maritime waterways. The objective of the project is to provide better understanding of the underlying nonlinear structure of the long-period primary waves and to separate the nonlinear spectral basic components within the ship-wave data from their nonlinear wave-wave interactions. In this paper, we present first analyses of the decomposition of ship-wave measurements from experimental tests and the identification of hidden solitons within the long-period primary ship wave.
The exploration of the gathered information shows that within the broader scope of sustainability, circularity is the most mentioned set of aspects that currently have a clear impact on façade design, closely followed by energy-related aspects, and further below by issues related to the user, nature inclusion, and value. Furthermore, it is possible to identify different and sometimes clashing approaches derived from different notions of sustainability: some interviewees believe in permanence and timeless buildings, which leads to massive structures and detailing focused on ageing and durability, while for others sustainability mainly revolves around using fewer raw materials and the reuse/recycling potential of building components, which leads to light structures with a focus on connections, aiming for total disassembly and material recovery. These, among others, should be regarded as a set of potential approaches whose suitability should be carefully assessed to match each project brief, under the larger aim to design and build sustainable façades, buildings and cities.","Façade design; Sustainability; Design process","en","conference paper","TU Delft OPEN Publishing","","","","","","","","","","Design of Construction","","",""
"uuid:2434f00d-78e4-4461-8e44-09c7f23c53fc","http://resolver.tudelft.nl/uuid:2434f00d-78e4-4461-8e44-09c7f23c53fc","Decay Patterns And Damage Processes Of Historic Concrete: A Survey In The Netherlands","Pardo Redondo, G. (TU Delft Heritage & Technology); Naldini, S. (TU Delft Heritage & Technology); Lubelli, B. (TU Delft Heritage & Technology)","Roca, P. (editor); Pelà, L. (editor); Molins, C. (editor)","2021","Historic concrete buildings (end of 19th century – 1960s), because of their “experimental” character, require a specific approach to both survey and conservation. Although they were built with empirical approaches, some buildings show a fair state of conservation and resilience –even though they have already exceeded the 100-year threshold– while others of comparable age are in severe need of restoration. As part of the European project CONSECH20, aimed at contributing to the conservation of cultural-heritage concrete buildings, this paper investigates the most common types of damage and their hypothetical causes, and which direct and indirect parameters can lead to a faster or slower deterioration of historic concrete in the Netherlands. The research is based on an initial screening study, which will be used as a basis for a larger study among the participating countries. The current research is divided into three phases. Firstly, a selection of 15 case studies from the Netherlands is investigated; the selection was based on criteria of age, state of conservation and type of ownership. Secondly, the history and materials of the buildings are examined. Thirdly, an on-site visual survey is performed for each building, with pre-designed templates, to identify the types of damage, their extent and severity. The data is then analysed combining different factors with a calculated index of severity. Results are discussed and contrasted to provide further clarification of the degradation of historic concrete.
A fourth phase, not discussed in this paper, will use this methodology in a broader context, with a larger number of case studies in different countries.","Historic structures; Concrete; Decay patterns; Damage processes; Assessment","en","conference paper","International Centre for Numerical Methods in Engineering, CIMNE","","","","","","","","","","Heritage & Technology","","",""
"uuid:ba5b5183-4feb-4f5a-9469-2b553486e849","http://resolver.tudelft.nl/uuid:ba5b5183-4feb-4f5a-9469-2b553486e849","Distributed Augmented Lagrangian Method for Link-Based Resource Sharing Problems of Multi-Agent Systems","Ananduta, W. (TU Delft Team Sergio Grammatico); Nedic, Angelia (Arizona State University); Ocampo-Martinez, Carlos (Universitat Politecnica de Catalunya)","","2021","A multi-agent optimization problem motivated by the management of energy systems is discussed. The associated cost function is separable and convex although not necessarily strongly convex and there exist edge-based coupling equality constraints. In this regard, we propose a distributed algorithm based on solving the dual of the augmented problem. Furthermore, we consider that the communication network might be time-varying and the algorithm might be carried out asynchronously. The time-varying nature and the asynchronicity are modeled as random processes. Then, we show the convergence and the convergence rate of the proposed algorithm under the aforementioned conditions.","asynchronous method; Communication networks; Convergence; Cost function; Couplings; Distributed algorithms; multi-agent optimization; Optimization; Random processes; stochastic timevarying network","en","journal article","","","","","","Accepted Author Manuscript","","","","","Team Sergio Grammatico","","",""
"uuid:e3ca52bc-47ee-4176-bb30-b7538dec9e53","http://resolver.tudelft.nl/uuid:e3ca52bc-47ee-4176-bb30-b7538dec9e53","Additive Manufacturing and Spark Plasma Sintering of Lunar Regolith for Functionally Graded Materials","Laot, M.A.L. (Student TU Delft); Rich, Belinda (European Space Agency (ESA)); Cheibas, Ina (European Space Agency (ESA)); Fu, J. (TU Delft Team Marcel Hermans); Zhu, Jia-Ning (TU Delft Team Vera Popovich); Popovich, V. (TU Delft Team Vera Popovich)","","2021","This study investigates the feasibility of in-situ manufacturing of a functionally graded metallic-regolith. To fabricate the gradient, digital light processing, an additive manufacturing technique, and spark plasma sintering were selected due to their compatibility with metallic-ceramic processing in a space environment. The chosen methods were first assessed for their ability to effectively consolidate regolith alone, before progressing to sintering regolith directly onto metallic substrates. Optimized processing conditions based on the sintering temperature, initial powder particle size, and different compositions of the lunar regolith powders were identified. Experiments have successfully proven the consolidation of lunar regolith simulants at 1050°C under 80 MPa with digital light processing and spark plasma sintering, while the metallic powders can be fully densified at relatively low temperatures and a pressure of 50 MPa with spark plasma sintering. Furthermore, the lunar regolith and Ti6Al4V gradient was proven to be the most promising combination.
While the current study showed that it is feasible to manufacture a functionally graded metallic-regolith, further developments of a fully optimized method have the potential to produce tailored, high-performance materials in an off-earth manufacturing setting for the production of aerospace, robotic, or architectural components.","in-situ resource utilisation; regolith; additive manufacturing; digital light processing; spark plasma sintering; direct laser deposition","en","journal article","","","","","","","","","","","Team Vera Popovich","","",""
"uuid:520370e1-8828-484a-98df-b751b30a52e0","http://resolver.tudelft.nl/uuid:520370e1-8828-484a-98df-b751b30a52e0","Graph-time convolutional neural networks","Isufi, E. (TU Delft Multimedia Computing); Mazzola, Gabriele (Student TU Delft)","","2021","Spatiotemporal data can be represented as a process over a graph, which captures their spatial relationships either explicitly or implicitly. How to leverage such a structure for learning representations is one of the key challenges when working with graphs. In this paper, we represent the spatiotemporal relationships through product graphs and develop a first-principles graph-time convolutional neural network (GTCNN). The GTCNN is a compositional architecture with each layer comprising a graph-time convolutional module, a graph-time pooling module, and a nonlinearity. We develop a graph-time convolutional filter by following the shift-and-sum principles of the convolutional operator to learn higher-level features over the product graph. The product graph itself is parametric so that we can also learn the spatiotemporal coupling from data. We develop a zero-pad pooling that preserves the spatial graph (the prior about the data) while reducing the number of active nodes and the parameters. Experimental results with synthetic and real data corroborate the different components and compare with baseline and state-of-the-art solutions.","Graph neural networks; Graph signal processing; Graph-time neural networks; Spatiotemporal learning","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Multimedia Computing","","",""
"uuid:362519af-4dba-484c-b4b3-9446c49b3e7f","http://resolver.tudelft.nl/uuid:362519af-4dba-484c-b4b3-9446c49b3e7f","Virtual reality supported design of smart grasper","Djokikj, Jelena (SS Cyril and Methodius University); Rizov, Tashko (SS Cyril and Methodius University); Jovanova, J. (TU Delft Transport Engineering and Logistics)","","2021","Smart material graspers have shown potential for different applications in terms of functionality and actuation, especially in handling arbitrary shapes, fragile objects and complex 3D geometries. However, to take these initial designs further towards real applications, the challenge remains to determine the optimal size, shape, and passive and smart material location. Virtual reality can be beneficial in early concept generation as it can help visualize and understand the grasping process. The access to suitable hardware and the development of virtual reality (VR) software have resulted in increased use of this technology. The 3D visualization offered by VR, especially in the early stages of the design process, assists engineers in making appropriate and efficient decisions, and it can also support interaction with the end user to iterate on potential design improvements. The conceptual phase is often overlooked and rushed by the other departments involved in the design and development process, although it is of great importance for a successful outcome, and it is important to make the most of it to assure a quality result. To keep the conceptual phase short without compromising product quality, we propose introducing VR in the early stages of the design process. In this paper we show how the use of VR can be beneficial in new product development.
In this case we focus on the design of smart material grasper.","Conceptual design phase; Design process; Smart materials grasper; Virtual reality (VR)","en","conference paper","ASME","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-04-20","","","Transport Engineering and Logistics","","",""
"uuid:3ef2449f-c722-44a0-9d6c-78ffff66f520","http://resolver.tudelft.nl/uuid:3ef2449f-c722-44a0-9d6c-78ffff66f520","Degradation of Aqueous CONFIDOR® Pesticide by Simultaneous TiO2 Photocatalysis and Fe-Zeolite Catalytic Ozonation","Raashid, Muhammad (University of Engineering and Technology Lahore); Kazmi, Mohsin (University of Engineering and Technology Lahore); Ikhlaq, Amir (University of Engineering and Technology Lahore); Iqbal, Tanveer (University of Engineering and Technology Lahore); Sulaiman, Muhammad (University of Engineering and Technology Lahore); Shakeel, A. (TU Delft Rivers, Ports, Waterways and Dredging Engineering; University of Engineering and Technology Lahore)","","2021","Due to the importance of water for human survival and scarcity of freshwater resources, wastewater treatment has become very important recently. Some persistent pollutants, such as pesticides, are not removed even after multiple conventional wastewater treatment techniques. Advanced oxidation processes (AOPs) are one of the novel techniques that can be used to treat these persistent compounds. Photocatalytic ozonation is a promising AOP that combines photocatalysis and ozonation for synergistic effects and faster degradation of persistent pollutants. However, usually, only a photocatalyst is used while combining photocatalysis and ozonation. In this work, both a photocatalyst and ozonation catalyst have been simultaneously used for the degradation of commercially available CONFIDOR® pesticide, a Bayer product with Imidacloprid as the active ingredient. TiO2 is employed as a photocatalyst, and Fe-coated Zeolite is employed as an ozonation catalyst. The results show that the reaction rate increases by 1.4 times if both catalysts are used as compared to the use of one photocatalyst only. 
Almost complete removal (>99%) of the pollutant is achieved after 20 min with the simultaneous use of both catalysts when imidacloprid with an initial concentration of 100 mg/L is subjected to 250 W/m2 UV at a wavelength of 253.7 nm and 100 mg/h ozone, whereas it takes 30 min if only the photocatalyst is used. The paper also explores the effect of initial concentration, UV intensity, catalyst dose and catalyst reuse, while also briefly discussing the kinetics and mechanism.","advanced oxidation processes; photocatalytic ozonation; Fe-coated zeolite; pesticide wastewater treatment","en","journal article","","","","","","","","","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:03247062-f3d6-4347-9325-a6031077dd57","http://resolver.tudelft.nl/uuid:03247062-f3d6-4347-9325-a6031077dd57","Prototyping in social VR: Anticipate the unanticipated outcomes of interactions between AI-powered solutions and users.","Mariani, Elena (Politecnico di Milano); Kooijman, F.S.C. (Student TU Delft); Shah, P. (TU Delft Methodologie en Organisatie van Design); Stoimenova, N. (TU Delft Methodologie en Organisatie van Design)","","2021","Interactions of users with AI-powered solutions (AIPS) have the potential to affect collective behaviours and amplify unanticipated outcomes. Product developers, organisations, and companies are increasingly being expected to take responsibility for the unanticipated outcomes of their products. In this paper we explore a proactive approach to prototyping AIPS-user interactions using Social Virtual Reality (SVR) environments, which allows for the anticipation of potential outcomes. We contend that doing so would limit the detrimental effect such outcomes could have on product developers' resources and reputation.","Artificial intelligence; Design process; Unanticipated outcomes; Virtual Prototyping; Virtual reality","en","journal article","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:aecbe124-a922-4aff-bfb5-0cc960a35590","http://resolver.tudelft.nl/uuid:aecbe124-a922-4aff-bfb5-0cc960a35590","Concept for a persona driven recommendation tool for process modelling approaches","Helten, Katharina (Vitesco Technologies); Eckert, Claudia (The Open University); Gericke, Kilian (Universität Rostock); Vermaas, P.E. (TU Delft Ethics & Philosophy of Technology)","","2021","In order to ensure successful product development processes, manifold modelling approaches have been developed, which cover a wide range of aspects such as responsibilities, duration of activities and dependencies. Still, an industry standard does not exist. Users of process modelling approaches are driven by different targets depending on their respective role. Currently, practitioners need to evaluate the strengths and weaknesses of each approach by themselves and find little guidance for the selection. As a consequence, users might select unsuitable approaches and not get the expected result. Thus, the intended applications of the model, such as analyses or an optimization of the process, are hampered. This could heavily affect companies' success through product or project failures. The paper presents the concept of a recommendation tool that enables a suitable and effective selection of process modelling approaches. The key element is the description of relevant use cases and personas that represent the various needs of different company types as well as of different roles within them, such as process modellers and users. By identifying the most relevant case, each practitioner will be guided to the most suitable modelling approach.","Design practice; Design process; Personas; Process modelling; Recommendation tool","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:64d7cb6d-336c-4804-aea9-c633ebae5a58","http://resolver.tudelft.nl/uuid:64d7cb6d-336c-4804-aea9-c633ebae5a58","A Healthy Metaphor?: The North Sea Consultation and the Power of Words","Haye Geukes, H. (Student TU Delft); Pesch, U. (TU Delft Ethics & Philosophy of Technology); Correljé, A. (TU Delft Economics of Technology and Innovation); Taebi, B. (TU Delft Ethics & Philosophy of Technology)","","2021","The North Sea Consultation was set up to resolve conflicting claims for space in the North Sea. In 2020, this consultation process resulted in the North Sea Agreement, which was supported by the Dutch Parliament and cabinet as a long-term policy; however, the fishing sector felt excluded, left the consultation process, and does not support the agreement. Using semi-structured interviews and the method of wide reflective equilibrium, this research found that in this conflict the metaphor of ‘health’ has played a decisive role. While all stakeholders want to keep the sea ‘healthy’, they disagree on what a healthy sea actually means, leading to contrasting positions on the desirability of trawler fishing, wind parks, and conservation areas—the North Sea Agreement’s main foci of interest. To prevent the unproductive escalation of such a conflict, it is essential to acknowledge the moral connotations of such metaphors, as this allows for a decision-making process that can be considered more just.","Ecological health; Fishery; Metaphors; North Sea agreement; North Sea consultation; Offshore wind energy; Political negotiation processes; Wicked problems","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:e7ace41d-03d3-4d41-ae35-4e61f4598f89","http://resolver.tudelft.nl/uuid:e7ace41d-03d3-4d41-ae35-4e61f4598f89","The impact of data on the role of designers and their process","Lu, Jiahao (Student TU Delft); Gomez Ortega, A. (TU Delft Internet of Things); Gonçalves, M. (TU Delft Methodologie en Organisatie van Design); Bourgeois, J. (TU Delft Internet of Things)","","2021","With the advance of the Internet and the Internet of Things, an abundance of 'big' data becomes available. Data science can be incorporated in design, which brings forward various opportunities for designers to benefit from this new material. However, the designer's perspective and their role remains unclear. How do they think about and approach data? What do they want to achieve with this data? What do they need to take ownership of designing with data? In this paper we take a design perspective to map the opportunities and challenges of leveraging large data-sets as part of the design process. We rely on a survey with 75 participants across a Faculty of Industrial Design Engineering and in-depth reflective interviews with a subset of 9 participants. We discuss the impact of data on the roles designers can adopt as well as an approach to designing with data. This work aims to inform on educational support, data literacy and tools needed for designers to take advantage of this new era of design digitalisation.","Big data; Data literacy; Design education; Design process; Designers' roles","en","journal article","","","","","","","","","","","Internet of Things","","",""
"uuid:aa56e345-a6c2-49ff-85c5-e77cf0deaba4","http://resolver.tudelft.nl/uuid:aa56e345-a6c2-49ff-85c5-e77cf0deaba4","Mitigating company adoption barriers of design-driven innovation with human centered design","Baha, S.E. (TU Delft Methodologie en Organisatie van Design; University of Quebec; Meaningwise); Ghei, Taresh (Philips Design Innovation); Kranzbühler, A. (TU Delft Marketing and Consumer Research)","","2021","In Design-Driven Innovation (D-DI) the meaning of a product or service is radically innovated to introduce a new paradigm that ideally can benefit people, companies, and society as a whole. However, due to the associated risks, most companies are hesitant to engage with and adopt D-DI. Human Centered Design (HCD) is preferred while innovation is limited to incremental change. This dichotomy is also reflected in design literature where D-DI is pitted against HCD. We propose the symbiosis of the two approaches as a strategy to create space for and the adoption of D-DI within companies. An instrumental design case study explores a design-driven service innovation and its adoption in a renowned airline. Results show an adopted D-DI where HCD evidence mitigates for the market and organization uncertainty while D-DI enabled a paradigm shift in the company's current service operation. Advantages and limitations of this mitigation strategy are discussed. With this design precedent, we aim to encourage designers and companies to further explore the benefits of a symbiotic use of D-DI and HCD.","Case study; Design process; Design-driven innovation; Human Centered Design; Service design","en","journal article","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:4016858b-a648-4a69-9c16-b8e2b9df57da","http://resolver.tudelft.nl/uuid:4016858b-a648-4a69-9c16-b8e2b9df57da","Source deghosting of coarsely sampled common-receiver data using a convolutional neural network","Vrolijk, J. (TU Delft Applied Geophysics and Petrophysics); Blacquière, G. (TU Delft Applied Geophysics and Petrophysics)","","2021","It is well known that source deghosting can best be applied to common-receiver gathers, whereas receiver deghosting can best be applied to common-shot records. The source-ghost wavefield observed in the common-shot domain contains the imprint of the subsurface, which complicates source deghosting in the common-shot domain, in particular when the subsurface is complex. Unfortunately, the alternative, that is, the common-receiver domain, is often coarsely sampled, which complicates source deghosting in this domain as well. To solve the latter issue, we have trained a convolutional neural network to apply source deghosting in this domain. We subsample all shot records with and without the receiver-ghost wavefield to obtain the training data. Due to reciprocity, these training data are a representative data set for source deghosting in the coarse common-receiver domain. We validate the machine-learning approach on simulated data and on field data. The machine-learning approach gives a significant uplift to the simulated data compared to conventional source deghosting. The field-data results confirm that the proposed machine-learning approach can remove the source-ghost wavefield from the coarsely sampled common-receiver gathers.","Aliasing; Artificial intelligence; Common receiver; Processing; Sampling","en","journal article","","","","","","Accepted Author Manuscript","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:a887415a-3e1c-4c01-aabe-d79bbb6a4502","http://resolver.tudelft.nl/uuid:a887415a-3e1c-4c01-aabe-d79bbb6a4502","Sampling Graph Signals with Sparse Dictionary Representation","Zhang, Kaiwen (Student TU Delft); Coutino, Mario (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Multimedia Computing)","","2021","Graph sampling strategies require the signal to be relatively sparse in an alternative domain, e.g. bandlimitedness for reconstructing the signal. When such a condition is violated or its approximation demands a large bandwidth, the reconstruction often comes with unsatisfactory results even with large samples. In this paper, we propose an alternative sampling strategy based on a type of overcomplete graph-based dictionary. The dictionary is built from graph filters and has demonstrated excellent sparse representations for graph signals. We recognize the proposed sampling problem as a coupling between support recovery of sparse signals and node selection. Thus, to approach the problem we propose a sampling procedure that alternates between these two. The former estimates the sparse support via orthogonal matching pursuit (OMP), which in turn enables the latter to build the sampling set selection through greedy algorithms. Numerical results corroborate the role of key parameters and the effectiveness of the proposed method.","Compressive sensing; Graph signal processing; Graph signal sampling; Signal reconstruction; Sparse sensing","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Signal Processing Systems","","",""
"uuid:2ead5387-97aa-4ecd-839b-21704388cad6","http://resolver.tudelft.nl/uuid:2ead5387-97aa-4ecd-839b-21704388cad6","Radar Resource Management for Multi-Target Tracking Using Model Predictive Control","de Boer, Thies (Student TU Delft); Schöpe, M.I. (TU Delft Microwave Sensing, Signals & Systems); Driessen, J.N. (TU Delft Microwave Sensing, Signals & Systems)","","2021","The radar resource management problem in a multi-target tracking scenario is considered. Partially observable Markov decision processes (POMDPs) are used to describe each tracking task. Model predictive control is applied to solve the POMDPs in a non-myopic way. As a result, the computational complexity compared to stochastic optimization methods such as policy rollout is dramatically reduced while the resource allocation results maintain similar. This is shown through simulations of dynamic multi-target tracking scenarios in which the cost and computational complexity of different approaches are compared.","Radar Resource Management; Constrained Optimization; Partially Observable Markov Decision Process; Model Predictive Control","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-06-02","","","Microwave Sensing, Signals & Systems","","",""
"uuid:cbe99628-e99f-4ac3-8b22-fcdf741c46c3","http://resolver.tudelft.nl/uuid:cbe99628-e99f-4ac3-8b22-fcdf741c46c3","The effect of deep learning reconstruction on abdominal CT densitometry and image quality: a systematic review and meta-analysis","van Stiphout, J.A. (TU Delft Science Centre & Programmering); Driessen, J. (Student TU Delft); Koetzier, L.R. (Student TU Delft); Ruules, L.B. (TU Delft Teaching & Learning Services); Willemink, Martin (Stanford University School of Medicine); Heemskerk, Jan W.T. (Leiden University Medical Center); van der Molen, Aart J. (Leiden University Medical Center)","","2021","Objective: To determine the difference in CT values and image quality of abdominal CT images reconstructed by filtered back-projection (FBP), hybrid iterative reconstruction (IR), and deep learning reconstruction (DLR). Methods: PubMed and Embase were systematically searched for articles regarding CT densitometry in the abdomen and the image reconstruction techniques FBP, hybrid IR, and DLR. Mean differences in CT values between reconstruction techniques were analyzed. A comparison between signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of FBP, hybrid IR, and DLR was made. A comparison of diagnostic confidence between hybrid IR and DLR was made. Results: Sixteen articles were included, six being suitable for meta-analysis. In the liver, the mean difference between hybrid IR and DLR was − 0.633 HU (p = 0.483, SD ± 0.902 HU). In the spleen, the mean difference between hybrid IR and DLR was − 0.099 HU (p = 0.925, SD ± 1.061 HU). In the pancreas, the mean difference between hybrid IR and DLR was − 1.372 HU (p = 0.353, SD ± 1.476 HU). In 14 articles, CNR was described. In all cases, DLR showed a significantly higher CNR. In 9 articles, SNR was described. In all cases but one, DLR showed a significantly higher SNR. In all cases, DLR showed a significantly higher diagnostic confidence. 
Conclusions: There were no significant differences in CT values reconstructed by FBP, hybrid IR, and DLR in abdominal organs. This shows that these reconstruction techniques are consistent in reconstructing CT values. DLR images showed a significantly higher SNR and CNR, compared to FBP and hybrid IR. Key Points: CT values of abdominal CT images are similar between deep learning reconstruction (DLR), filtered back-projection (FBP), and hybrid iterative reconstruction (IR).DLR results in improved image quality in terms of SNR and CNR compared to FBP and hybrid IR images.DLR can thus be safely implemented in the clinical setting resulting in improved image quality without affecting CT values.","Tomography; x-ray computed; Abdomen; Image processing, computer-assisted; Deep learning","en","journal article","","","","","","","","","","","Science Centre & Programmering","","",""
"uuid:4fe702f7-b54e-4061-badc-441946a341d7","http://resolver.tudelft.nl/uuid:4fe702f7-b54e-4061-badc-441946a341d7","Radar sensing for human healthcare: Challenges and results","Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems); Le Kernec, Julien (University of Glasgow)","","2021","In this paper, radar sensing in the domain of human healthcare is discussed, specifically looking at the typical applications of human activity classification (including fall detection), gait analysis and gait parameters extraction, and vital signs monitoring such as respiration and heartbeat. A brief overview of open research challenges and trends in this domain are provided, showing that radar sensors and sensing can play a significant role in the domain of human healthcare.","radar sensing; radar signal processing; machine learning; human activity classification","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-06-17","","","Microwave Sensing, Signals & Systems","","",""
"uuid:15b99bb8-7f60-4dbd-9e7a-53b86a115802","http://resolver.tudelft.nl/uuid:15b99bb8-7f60-4dbd-9e7a-53b86a115802","Timing and Resource-aware Mapping of Quantum Circuits to Superconducting Processors","Lao, L. (TU Delft Computer Engineering; University College London (UCL)); van Someren, J. (TU Delft QCD/Vandersypen Lab); Ashraf, I. (TU Delft QCD/Almudever Lab; TU Delft Computer Engineering; HITEC University); Almudever, Carmen G. (TU Delft QCD/Almudever Lab)","","2021","Quantum algorithms need to be compiled to respect the constraints imposed by quantum processors, which is known as the mapping problem. The mapping procedure will result in an increase of the number of gates and of the circuit latency, decreasing the algorithm's success rate. It is crucial to minimize mapping overhead, especially for noisy intermediate-scale quantum (NISQ) processors that have relatively short qubit coherence times and high gate error rates. Most of prior mapping algorithms have only considered constraints, such as the primitive gate set and qubit connectivity, but the actual gate duration and the restrictions imposed by the use of shared classical control electronics have not been taken into account. In this article, we present a mapper called Qmap to make quantum circuits executable on scalable processors with the objective of achieving the shortest circuit latency. In particular, we propose an approach to formulate the classical control restrictions as resource constraints in a conventional list scheduler with polynomial complexity. Furthermore, we implement a routing heuristic to cope with the connectivity limitation. This router finds a set of movement operations that minimally extends circuit latency. To analyze the mapping overhead and evaluate the performance of different mappers, we map 56 quantum benchmarks onto a superconducting processor named Surface-17. 
Compared to a prior mapping strategy that minimizes the number of operations, Qmap can reduce the latency overhead (LtyOH) by up to 47.3% and the operation overhead by up to 28.6%.","Logic gates; Parallel processing; Program processors; Quantum circuit; quantum compilation; Quantum computing; Qubit; resource-constrained scheduling; routing; Surface treatment","en","journal article","","","","","","","","","","","Computer Engineering","","",""
"uuid:b8e70293-c6ba-4a54-9193-89df7436cc3e","http://resolver.tudelft.nl/uuid:b8e70293-c6ba-4a54-9193-89df7436cc3e","Counting people in the crowd using social media images for crowd management in city events","Gong, X. (TU Delft Transport and Planning); Daamen, W. (TU Delft Transport and Planning); Bozzon, A. (TU Delft Human-Centred Artificial Intelligence); Hoogendoorn, S.P. (TU Delft Transport and Planning)","","2021","City events are getting popular and are attracting a large number of people. This increase needs for methods and tools to provide stakeholders with crowd size information for crowd management purposes. Previous works proposed a large number of methods to count the crowd using different data in various contexts, but no methods proposed using social media images in city events and no datasets exist to evaluate the effectiveness of these methods. In this study we investigate how social media images can be used to estimate the crowd size in city events. We construct a social media dataset, compare the effectiveness of face recognition, object recognition, and cascaded methods for crowd size estimation, and investigate the impact of image characteristics on the performance of selected methods. Results show that object recognition based methods, reach the highest accuracy in estimating the crowd size using social media images in city events. We also found that face recognition and object recognition methods are more suitable to estimate the crowd size for social media images which are taken in parallel view, with selfies covering people in full face and in which the persons in the background have the same distance to the camera. However, cascaded methods are more suitable for images taken from top view with gatherings distributed in gradient. 
The created social media dataset is essential for selecting image characteristics and evaluating the accuracy of people counting methods in an urban event context.","Crowd size estimation; Face recognition; Image processing; Input for crowd management; Social media analysis","en","journal article","","","","","","","","","","Transport and Planning","Transport and Planning","","",""
"uuid:c5d10fbd-59bb-45b5-8080-994c0fc3316e","http://resolver.tudelft.nl/uuid:c5d10fbd-59bb-45b5-8080-994c0fc3316e","Graphon Filters: Graph Signal Processing in the Limit","Morency, M.W. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2021","Graph signal processing is an emerging field which aims to model processes that exist on the nodes of a network and are explained through diffusion over this structure. Graph signal processing works have heretofore assumed knowledge of the graph shift operator. Our approach is to investigate the question of graph filtering on a graph about which we only know a model. To do this we leverage the theory of graphons proposed by L. Lovasz and B. Szegedy. We make three key contributions to the emerging field of graph signal processing. We show first that filters defined over the scaled adjacency matrix of a random graph drawn from a graphon converge to filters defined over the Fredholm integral operator with the graphon as its kernel. Second, leveraging classical findings from the theory of the numerical solution of Fredholm integral equations, we define the Fourier-Galerkin shift operator. Lastly, using the Fourier-Galerkin shift operator, we derive a graph filter design algorithm which only depends on the graphon, and thus depends only on the probabilistic structure of the graph instead of the particular graph itself. 
The derived graphon filtering algorithm is verified through simulations on a variety of random graph models.","Graph signal processing; graph filter design; graphons; random graphs","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public","","2021-08-24","","","Signal Processing Systems","","",""
"uuid:07c00a9b-c9d6-4f36-96a1-7efb84b49831","http://resolver.tudelft.nl/uuid:07c00a9b-c9d6-4f36-96a1-7efb84b49831","Supporting interdisciplinary collaborative concept mapping with individual preparation phase","Tan, E.B.K. (TU Delft Web Information Systems; Leiden-Delft-Erasmus Centre for Education and Learning (LDE-CEL)); de Weerd, Jacob Gerolf (NHL University of Applied Science); Stoyanov, Slavi (Open University of the Netherlands)","","2021","Concept mapping facilitates the externalisation and internalisation of knowledge by individuals during collaborative knowledge construction. However, not much is known about the individual and collaborative learning processes during collaborative concept mapping (CCM) in interdisciplinary knowledge construction. Premised on literature on collaboration scripts to scaffold the collaboration process, this study investigates the effect of an individual preparation phase prior to collaborative work on the epistemic and social processes of knowledge co-construction, as well as the degree of interdisciplinary knowledge integration in collaborative concept mapping. A total of N = 42 third year university students were put into one of the two experimental conditions: with individual preparation phase (WIP) and without individual preparation phase (WOIP). Students worked on a collaborative assignment to integrate interdisciplinary knowledge in collaborative concept mapping. Data for analysis was derived from audio recordings of the collaborative discourse in both experimental conditions. Chi-square test was conducted to investigate if there were significant differences between the effects of WIP and WOIP on the epistemological and social dimension. 
Findings showed that groups in the WIP condition produced significantly more verification, clarification and positioning statements in the epistemic dimension and also significantly more integration-oriented and conflict-oriented consensus building in the social dimension as compared to groups in the WOIP condition. On the degree of interdisciplinary knowledge integration, independent sample t-tests showed that there was no significant difference for concepts, domains and cross-links between the two experimental conditions. However, there was a significant difference in the types of cross-links for the CCMs in the WIP condition.","Collaborative concept map; Common ground; Epistemic processes; Interdisciplinary knowledge integration; Knowledge co-construction; Social processes","en","journal article","","","","","","","","","","","Web Information Systems","","",""
"uuid:c655ba14-0669-49d0-aadb-7649ca427e5a","http://resolver.tudelft.nl/uuid:c655ba14-0669-49d0-aadb-7649ca427e5a","Building with Nature as a cross-disciplinary approach: The role of hybrid contributions","van Bergen, J. (TU Delft Landscape Architecture); Nijhuis, S. (TU Delft Landscape Architecture); Brand, A.D. (TU Delft Projects); Hertogh, M.J.C.M. (TU Delft Integral Design & Management)","","2021","The incentive for this publication was to expand the realm of enquiry around the topic of Building with Nature (BwN), for two main reasons. First to gain an interdisciplinary, and therefore deeper, understanding of BwN as an object of study. Secondly, but no less important, is an understanding of how different forms of knowledge contribute to our learning regarding BwN. When we understand the contribution of several academic disciplines and knowledge from practice, we may eventually get to the point where we can identify how they can collaborate successfully to contribute to BwN as an interdisciplinary field.","Reflection; BwN; Building with Nature; nature-based solutions; coastal protection; adaptive planning and design; water managemen; natural processes; delta landscapes; ecological hydraulic engineering","en","journal article","","","","","","Vol. 7 (2021): Building with Nature perspectives: Cross-disciplinary BwN approaches in coastal regions. ISBN 978-94-6366-379-3","","","","","Landscape Architecture","","",""
"uuid:bdfb4f6e-ce37-471c-83aa-eb02cbfa5554","http://resolver.tudelft.nl/uuid:bdfb4f6e-ce37-471c-83aa-eb02cbfa5554","Editorial: Building with Nature perspectives","van Bergen, J. (TU Delft Landscape Architecture); Nijhuis, S. (TU Delft Landscape Architecture); Brand, A.D. (TU Delft Projects); Hertogh, M.J.C.M. (TU Delft Integral Design & Management)","","2021","This publication offers an overview of the latest cross-disciplinary developments in the field of Building with Nature (BwN) for the protection of coastal regions. The key philosophy of BwN is the employment of natural processes to serve societal goals, such as flood safety. The starting point is a systems-based approach, making interventions that employ the shaping forces of the natural system to perform measures by self-regulation. Initial pilots of this innovative approach originate from coastal engineering, with the Sand Motor along the coast of South Holland as one of the prime examples. From here, the BwN approach has evolved into a new generation of nature-based hydraulic solutions, such as mangrove forests, coastal reefs, and green dikes.","BwN; building with nature; Nature-based solutions; coastal protection; adaptive planning and design; Water management; natural processes; delta landscapes; ecological hydraulic engineering","en","contribution to periodical","","","","","","Vol. 7 (2021): Building with Nature perspectives: Cross-disciplinary BwN approaches in coastal regions. ISBN 978-94-6366-379-3","","","","","Landscape Architecture","","",""
"uuid:0f0f3322-4901-4624-aa54-aa80fb30daa2","http://resolver.tudelft.nl/uuid:0f0f3322-4901-4624-aa54-aa80fb30daa2","Extreme industrial effluents: Opportunities for reuse","Spanjers, H. (TU Delft Sanitary Engineering)","Davis, Cheryl (editor); Rosenblum, Eric (editor)","2021","","biological treatment; water reuse; used process water; toxic effluents; saline effluents; physiochemical treatment; industrial water reuse; high temperature; OA-Fund TU Delft","en","book chapter","International Water Association (IWA)","","","","","","","","","","Sanitary Engineering","","",""
"uuid:8f220e48-add5-4063-9c74-8f5b8688789d","http://resolver.tudelft.nl/uuid:8f220e48-add5-4063-9c74-8f5b8688789d","A Distributed Augmented Lagrangian Method over Stochastic Networks for Economic Dispatch of Large-Scale Energy Systems","Ananduta, W. (TU Delft Team Bart De Schutter); Ocampo-Martinez, Carlos (Universitat Politecnica de Catalunya); Nedic, Angelia (Arizona State University)","","2021","In this paper, we propose a distributed model predictive control (MPC) scheme for economic dispatch of energy systems with a large number of active components. The scheme uses a distributed optimization algorithm that works over random communication networks and asynchronous updates, implying the resiliency of the proposed scheme with respect to communication problems, such as link failures, data packet drops, and delays. The distributed optimization algorithm is based on the augmented Lagrangian approach, where the dual of the considered convex economic dispatch problem is solved. Furthermore, in order to improve the convergence speed of the algorithm, we adapt Nesterov's accelerated gradient method and apply the warm start method to initialize the variables. We show through numerical simulations of a well-known case study the performance of the proposed scheme.","Acceleration; Communication networks; Economics; Index terms -economic dispatch; model predictive control; multi-agent optimization; Optimization; Predictive control; Production; Stochastic processes; stochastic time-varying network","en","journal article","","","","","","Accepted Author Manuscript","","","","","Team Bart De Schutter","","",""
"uuid:68f686f7-3058-4615-8e50-ef5574d358f9","http://resolver.tudelft.nl/uuid:68f686f7-3058-4615-8e50-ef5574d358f9","Nonlinear State-Space Generalizations of Graph Convolutional Neural Networks","Ruiz, Luana (University of Pennsylvania); Gama, Fernando (University of California); Ribeiro, Alejandro (University of Pennsylvania); Isufi, E. (TU Delft Multimedia Computing)","","2021","Graph convolutional neural networks (GCNNs) learn compositional representations from network data by nesting linear graph convolutions into nonlinearities. In this work, we approach GCNNs from a state-space perspective revealing that the graph convolutional module is a minimalistic linear state-space model, in which the state update matrix is the graph shift operator. We show that this state update may be problematic because it is nonparametric, and depending on the graph spectrum it may explode or vanish. Therefore, the GCNN has to trade its degrees of freedom between extracting features from data and handling these instabilities. To improve such trade-off, we propose a novel family of nodal aggregation rules that aggregate node features within a layer in a nonlinear state-space parametric fashion allowing for a better trade-off. We develop two architectures within this family inspired by the recurrence with and without nodal gating mechanisms. The proposed solutions generalize the GCNN and provide an additional handle to control the state update and learn from the data. Numerical results on source localization and authorship attribution show the superiority of the nonlinear state-space generalization models over the baseline GCNN.","Graph neural networks; Graph signal processing; Nonlinear systems; State-space models","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Multimedia Computing","","",""
"uuid:761d8624-a79e-4af4-92a9-7585dbe04921","http://resolver.tudelft.nl/uuid:761d8624-a79e-4af4-92a9-7585dbe04921","Topological Volterra Filters","Leus, G.J.T. (TU Delft Signal Processing Systems); Yang, M. (TU Delft Multimedia Computing); Coutino, Mario (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Multimedia Computing)","","2021","To deal with high-dimensional data, graph filters have shown their power in both graph signal processing and data science. However, graph filters process signals exploiting only pairwise interactions between the nodes, and they are not able to exploit more complicated topological structures. Graph Volterra models, on the other hand, are also able to exploit relations between triplets, quadruplets and so on. However, they have only been exploited for topology identification and are only based on one-hop relations. In this paper, we first review graph filters and graph Volterra models and then merge the two concepts resulting in so-called topological Volterra filters (TVFs). TVFs process signals over multiple hops of higher-level topological structures. First-level TVFs are basically similar to traditional graph filters, yet higher-level TVFs provide a more general processing framework. We apply TVFs to inverse filtering and recommender systems.","Graph Volterra model; Graph filters; Higherlevel interactions; Graph signal processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-11-13","","","Signal Processing Systems","","",""
"uuid:d7a3d585-a426-42b7-b9c3-2906420bcd30","http://resolver.tudelft.nl/uuid:d7a3d585-a426-42b7-b9c3-2906420bcd30","Application of ultraviolet, visible, and infrared light imaging in protein-based biopharmaceutical formulation characterization and development studies","Klijn, M.E. (TU Delft BT/Bioprocess Engineering); Hubbuch, Juergen (Karlsruhe Institut für Technologie)","","2021","Imaging is increasingly more utilized as analytical technology in biopharmaceutical formulation research, with applications ranging from subvisible particle characterization to thermal stability screening and residual moisture analysis. This review offers a comprehensive overview of analytical imaging for scientists active in biopharmaceutical formulation research and development, where it presents the unique information provided by the ultraviolet (UV), visible (Vis), and infrared (IR) sections in the electromagnetic spectrum. The main body of this review consists of an outline of UV, Vis, and IR imaging techniques for several (bio)physical properties that are commonly determined during protein-based biopharmaceutical formulation characterization and development studies. The review concludes with a future perspective of applied imaging within the field of biopharmaceutical formulation research.","Computer vision; Formulation development; High-throughput screening; Image processing; Machine learning; Protein analytics","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:01cb117a-339c-42e9-9691-8456a12e3947","http://resolver.tudelft.nl/uuid:01cb117a-339c-42e9-9691-8456a12e3947","Measuring Cybercrime as a Service (CaaS) Offerings in a Cybercrime Forum","Akyazi, U. (TU Delft Organisation & Governance); van Eeten, M.J.G. (TU Delft Organisation & Governance); Hernandez Ganan, C. (TU Delft Organisation & Governance)","","2021","The emergence of Cybercrime-as-a-Service (CaaS) is a critical evolution in the cybercrime landscape. A key area of research on CaaS is where and how the supply of CaaS is being matched with demand. Next to underground marketplaces and custom websites, cybercrime forums provide an important channel for CaaS suppliers to attract customers. Our study presents the first comprehensive and longitudinal analysis of types of CaaS supply and demand on a cybercrime forum. We develop a classifier to identify supply and demand for each type and measure their relative prevalence and apply this to a dataset spanning 11 years of posts on Hack Forums, one of the largest and oldest ongoing English-language cybercrime forum on the surface web. Of 28 known CaaS types, we only found evidence for only 9 of these in the forum.We saw no dramatic shifts in these offerings over time, not even after major underground marketplaces were being seized by law enforcement. Around 16% of first posts of the threads in the ‘Market’ section of the forum offers CaaS, whereas only 3% is focused on product-type criminal offerings. Within the types of CaaS, ‘bot/botnet as a service’, ‘reputation escalation as a service’ and ‘traffic as a service’ categories make up the majority (over 60%) for whole period in terms of both supply and demand. At least half of each CaaS offerings directs potential buyers to an instant messaging app or private message for transacting privately. 
In sum, we find that forums do in fact provide a channel for CaaS supply and demand to meet, but we see only a fraction of the CaaS landscape and there is no evidence in our data for the supposed growth of CaaS over time. We reflect on the implications of our findings for developing effective disruption strategies by law enforcement.","Cybercrime as a Service; CaaS; Cybercrime Forum; Machine Learning; Natural Language Processing","en","conference paper","","","","","","","","","","","Organisation & Governance","","",""
"uuid:efaee15c-2492-4420-a25a-23e9ea9248e2","http://resolver.tudelft.nl/uuid:efaee15c-2492-4420-a25a-23e9ea9248e2","The effect of addition of hardystonite on the strength, ductility and corrosion resistance of WE43 magnesium alloy","Eivani, A.R. (Iran University of Science and Technology); Tabatabaei, F. (Iran University of Science and Technology); Khavandi, A. R. (Iran University of Science and Technology); Tajabadi, M. (Iran University of Science and Technology); Mehdizade, M. (Iran University of Science and Technology); Jafarian, H. R. (Iran University of Science and Technology); Zhou, J. (TU Delft Biomaterials & Tissue Biomechanics)","","2021","A composite material based on the WE43 magnesium alloy and containing nano-sized hardystonite ceramic particles was processed by means of friction stir processing (FSP). Compressive strength and strain-at-failure of the WE43 alloy increased as a combined result of FSP and nanoparticle reinforcement. The results of potentiondynamic polarization and electrochemical impedance spectroscopy tests indicated that the corrosion mechanism of the nanocomposite is combination of uniform corrosion and localized pitting corrosion which is not different from the base metal. However, the corrosion rate is significantly decreased as a result of reduced localized corrosion of the base metal after FSP and the effect of hardystonite to reduce pitting corrosion. The polarization resistance is increased from 192.48 to 339.61 and 1318.12 Ω/cm2 by applying FSP on WE43 and addition of nano-sized hardystonite particles, respectively. Indeed, the fabricated nanocomposite shows significantly increased corrosion resistance. 
Enhanced strength, ductility and corrosion resistance were attributed to grain refinement in addition to the fragmentation and redistribution of second-phase particles in the magnesium matrix, occurring during FSP.","Corrosion properties; Friction stir processing; Hardystonite; Magnesium; Nanocomposite","en","journal article","","","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:aecd5a3d-d429-4433-9fe3-13080316fd05","http://resolver.tudelft.nl/uuid:aecd5a3d-d429-4433-9fe3-13080316fd05","Learning Optimal Controllers for Linear Systems with Multiplicative Noise via Policy Gradient","Gravell, Benjamin (University of Texas at Dallas); Mohajerin Esfahani, P. (TU Delft Team Bart De Schutter); Summers, Tyler H. (University of Texas at Dallas)","","2021","The linear quadratic regulator (LQR) problem has reemerged as an important theoretical benchmark for reinforcement learning-based control of complex dynamical systems with continuous state and action spaces. In contrast with nearly all recent work in this area, we consider multiplicative noise models, which are increasingly relevant because they explicitly incorporate inherent uncertainty and variation in the system dynamics and thereby improve robustness properties of the controller. Robustness is a critical and poorly understood issue in reinforcement learning; existing methods which do not account for uncertainty can converge to fragile policies or fail to converge at all. Additionally, intentional injection of multiplicative noise into learning algorithms can enhance robustness of policies, as observed in ad hoc work on domain randomization. Although policy gradient algorithms require optimization of a non-convex cost function, we show that the multiplicative noise LQR cost has a special property called gradient domination, which is exploited to prove global convergence of policy gradient algorithms to the globally optimum control policy with polynomial dependence on problem parameters. 
Results are provided both in the model-known and model-unknown settings where samples of system trajectories are used to estimate policy gradients.","Additive noise; Convergence; Covariance matrices; gradient methods; noise; optimal control; Reinforcement learning; Robustness; Stability analysis; Stochastic processes; stochastic systems; uncertain systems; Uncertainty","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-05-10","","","Team Bart De Schutter","","",""
"uuid:37bad4b1-09ef-4386-bb56-5e52c18caa3a","http://resolver.tudelft.nl/uuid:37bad4b1-09ef-4386-bb56-5e52c18caa3a","A hybrid approach to structural modeling of individualized HRTFs","Miccini, Riccardo (Aalborg University); Spagnol, S. (TU Delft Design Aesthetics)","","2021","We present a hybrid approach to individualized head-related transfer function (HRTF) modeling which requires only 3 anthropometric measurements and an image of the pinna. A prediction algorithm based on variational autoencoders synthesizes a pinna-related response from the image, which is used to filter a measured head-andtorso response. The interaural time difference is then manipulated to match that of the HUTUBS dataset subject minimizing the predicted localization error. The results are evaluated using spectral distortion and an auditory localization model. While the latter is inconclusive regarding the efficacy of the structural model, the former metric shows promising results with encoding HRTFs. Index Terms: Hardware - Digital signal processing; Computing methodologies - Neural networks; Applied computing - Sound and music computing","Digital signal processing; Neural networks; Sound and music computing","en","conference paper","IEEE","","","","","","","","","","Design Aesthetics","","",""
"uuid:2a84fcef-56e7-43f8-b812-a790e74d2026","http://resolver.tudelft.nl/uuid:2a84fcef-56e7-43f8-b812-a790e74d2026","What makes a good driver on public roads and race tracks? An interview study","Doubek, F.H. (TU Delft Intelligent Vehicles; Dr. Ing. h.c. F. Porsche AG); Salzmann, Falk (Technische Universität Dresden); de Winter, J.C.F. (TU Delft Human-Robot Interaction)","","2021","Future vehicles may drive automatically in a human-like manner or contain systems that monitor human driving ability. Algorithms of these systems must have knowledge of criteria of good and safe driving behavior with regard to different driving styles. In the current study, interviews were conducted with 30 drivers, including driving instructors, engineers, and race drivers. The participants were asked to describe good driving on public roads and race tracks, and in some questions were supported with video material. The results were interpreted with the help of Endsley's model of situation awareness. The interviews showed that there were clear differences between what was considered good driving on the race track and good driving on the public road, where for the former, the driver must touch the limit of the vehicle, whereas, for the latter, the limit should be avoided. However, in both cases, a good driver was characterized by self-confidence, lack of stress, and not being aggressive. Furthermore, it was mentioned that the driver's posture and viewing behavior are essential components of good driving, which affect the driver's prediction of events and execution of maneuvers. The implications of our findings for the development of automation technology are discussed. In particular, we see potential in driver posture estimation and argue that automated vehicles excel in perception but may have difficulty making predictions.","Driver assessment; Information processing; Posture; Racing; Situation awareness","en","journal article","","","","","","","","","","","Intelligent Vehicles","","",""
"uuid:37c1595e-53dc-4464-a792-04332d12ab6e","http://resolver.tudelft.nl/uuid:37c1595e-53dc-4464-a792-04332d12ab6e","High Frame Rate Volumetric Imaging of Microbubbles Using a Sparse Array and Spatial Coherence Beamforming","Wei, Luxi (Erasmus MC); Wahyulaksana, G. (Erasmus MC); Meijlink, Bram (Erasmus MC); Ramalli, Alessandro (University of Florence); Noothout, E.C. (TU Delft ImPhys/Medical Imaging); Verweij, M.D. (TU Delft ImPhys/Medical Imaging); van der Steen, A.F.W. (TU Delft ImPhys/Medical Imaging; Erasmus MC); de Jong, N. (TU Delft ImPhys/Medical Imaging; Erasmus MC); Vos, H.J. (TU Delft ImPhys/Medical Imaging; Erasmus MC)","","2021","Volumetric ultrasound imaging of blood flow with microbubbles enables a more complete visualization of the microvasculature. Sparse arrays are ideal candidates to perform volumetric imaging at reduced manufacturing complexity and cable count. However, due to the small number of transducer elements, sparse arrays often come with high clutter levels, especially when wide beams are transmitted to increase the frame rate. In this study, we demonstrate with a prototype sparse array probe and a diverging wave transmission strategy, that a uniform transmission field can be achieved. With the implementation of a spatial coherence beamformer, the background clutter signal can be effectively suppressed, leading to a signal to background ratio improvement of 25 dB. With this approach, we demonstrate the volumetric visualization of single microbubbles in a tissue-mimicking phantom as well as vasculature mapping in a live chicken embryo chorioallantoic membrane.","Array signal processing; Clutter; Coherence beamforming; high frame rate; Imaging; microbubbles; Signal to noise ratio; sparse array; Spatial coherence; Spirals; Ultrasonic imaging; volumetric imaging","en","journal article","","","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:b1ba78ac-15c9-4e79-b978-d58c720810b2","http://resolver.tudelft.nl/uuid:b1ba78ac-15c9-4e79-b978-d58c720810b2","Generating Images from Spoken Descriptions","Wang, X. (TU Delft Multimedia Computing; Xi’an Jiaotong University); Qiao, T. (TU Delft Multimedia Computing); Zhu, Jihua (Xi’an Jiaotong University); Hanjalic, A. (TU Delft Intelligent Systems); Scharenborg, O.E. (TU Delft Multimedia Computing)","","2021","Text-based technologies, such as text translation from one language to another, and image captioning, are gaining popularity. However, approximately half of the world's languages are estimated to be lacking a commonly used written form. Consequently, these languages cannot benefit from text-based technologies. This paper presents 1) a new speech technology task, i.e., a speech-to-image generation (S2IG) framework which translates speech descriptions to photo-realistic images 2) without using any text information, thus allowing unwritten languages to potentially benefit from this technology. The proposed speech-to-image framework, referred to as S2IGAN, consists of a speech embedding network and a relation-supervised densely-stacked generative model. The speech embedding network learns speech embeddings with the supervision of corresponding visual information from images. The relation-supervised densely-stacked generative model synthesizes images, conditioned on the speech embeddings produced by the speech embedding network, that are semantically consistent with the corresponding spoken descriptions. Extensive experiments are conducted on four public benchmark databases: two databases that are commonly used in text-to-image generation tasks, i.e., CUB-200 and Oxford-102 for which we created synthesized speech descriptions, and two databases with natural speech descriptions which are often used in the field of cross-modal learning of speech and images, i.e., Flickr8k and Places. 
Results on these databases demonstrate the effectiveness of the proposed S2IGAN on synthesizing high-quality and semantically-consistent images from the speech signal, yielding a good performance and a solid baseline for the S2IG task.","adversarial learning; Birds; Databases; Electronic mail; Image synthesis; multimodal modelling; Semantics; speech embedding; Speech processing; Speech-to-image generation; Task analysis; speech-to-image generation; Adversarial learning","en","journal article","","","","","","","","","","Intelligent Systems","Multimedia Computing","","",""
"uuid:2ba4c7fc-b5f4-45a5-8f83-d995cf942da4","http://resolver.tudelft.nl/uuid:2ba4c7fc-b5f4-45a5-8f83-d995cf942da4","Towards an Engagement-Aware Attentive Artificial Listener for Multi-Party Interactions","Oertel, Catharine (TU Delft Interactive Intelligence); Jonell, Patrik (KTH Royal Institute of Technology); Kontogiorgos, Dimosthenis (KTH Royal Institute of Technology); Mora, Kenneth Funes (Eyeware Tech SA); Odobez, Jean Marc (Idiap Research Institute); Gustafson, Joakim (KTH Royal Institute of Technology)","","2021","Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves the function to gather information for oneself, but at the same time, it also signals to the speaker that he/she is being heard. To deduce whether our interlocutor is listening to us, we are relying on reading his/her nonverbal cues, very much like how we also use non-verbal cues to signal our attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper aims to bring together previous analyses of listener behavior analyses in human-human multi-party interaction and provide novel insights into gaze patterns between the listeners in particular. We are investigating whether the gaze patterns and feedback behavior, as observed in the human-human dialogue, are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we are implementing an attentive listening system that generates multi-modal listening behavior based on our human-human analysis. 
We compare our system to a baseline system that does not differentiate between different listener types in its behavior generation. We evaluate it in terms of the participants’ perception of the robot and its behavior, as well as the perception of third-party observers.","artificial listener; eye-gaze patterns; head gestures; human-robot interaction; multi-party interactions; non-verbal behaviors; social signal processing","en","journal article","","","","","","","","","","","Interactive Intelligence","","",""
"uuid:2caa7502-0e98-4a0a-adbc-0af2be51667b","http://resolver.tudelft.nl/uuid:2caa7502-0e98-4a0a-adbc-0af2be51667b","Homeowners’ Participation in Energy Efficient Renovation Projects in China’s Northern Heating Region","Ma, J. (TU Delft Housing Quality and Process Innovation); Qian, QK (TU Delft Housing Quality and Process Innovation); Visscher, H.J. (TU Delft Housing Quality and Process Innovation); Song, Kun (Tianjin University)","","2021","In China’s government-led energy efficient renovation of residential buildings, homeowners’ participation refers to their involvement and engagement throughout the process. Lacking homeowners’ participation has brought difficulties in the execution and financing of the projects. This paper explores the current situation of homeowners’ participation and provides suggestions for optimization from three perspectives: the steps and procedures of the participation process, the composition of the working group responsible for contacting the homeowners, and the contents to be discussed during the process. The semi-structured interview and questionnaire results show that homeowners’ participation is not adequate, and the current arrangement deviates from their expectations. Although most homeowners are positive towards government-led renovation and are enthusiastic about being involved, the process setup is not well-designed to let them fully participate. Moreover, their expectations and preferences are related to several factors. It can be concluded that relevant laws and regulations should be introduced to provide a basis for solving problems at the executive level, and homeowner associations should be established to serve as a channel of communication between homeowners and the working group. 
Designing a targeted renovation and participation strategy is necessary to minimize communication efforts.","Energy efficient renovation; Existing housing stock; Participation; Renovation process","en","journal article","","","","","","","","","","","Housing Quality and Process Innovation","","",""
"uuid:2b4aa3ef-af93-4d40-948a-26cb3c24b96b","http://resolver.tudelft.nl/uuid:2b4aa3ef-af93-4d40-948a-26cb3c24b96b","Predicting major accidents in the process industry based on the barrier status at scenario level: A practical approach","Schmitz, P.J.H. (TU Delft Safety and Security Science; OCI-Nitrogen); Swuste, P.H.J.J. (TU Delft Safety and Security Science); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); van Nunen, K.L.L. (TU Delft Safety and Security Science; Universiteit Antwerpen)","","2021","OCI Nitrogen wants to gain knowledge of (leading) indicators regarding the process safety performance of their ammonia production process. This paper answers the question whether indicators can be derived from the barrier system status to provide information about the development and likelihood of the major accident processes in the ammonia production process. The accident processes are visualized as scenarios in bowties. This research focuses on the status of the preventive barriers on the left-hand side of the bowtie. Both the quality – expressed in reliability/availability and effectiveness – and the activation of the barrier system give an indication of the development of the accident scenarios and the likelihood of the central event. This likelihood is calculated as a loss of risk reduction compared to the original design. The calculation results in an indicator called “preventive barrier indicator”, which should initiate further action. Based on an example, it is demonstrated which actions should be taken and what their urgency is.","Ammonia; Barrier; Bowtie; Indicator; Process safety; Scenario","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:df4cdad8-083d-49d9-a79b-7653b8de9ed7","http://resolver.tudelft.nl/uuid:df4cdad8-083d-49d9-a79b-7653b8de9ed7","The life cycle of creative ideas: Towards a dual-process theory of ideation","Gonçalves, M. (TU Delft Methodologie en Organisatie van Design); Cash, Philip (Technical University of Denmark)","","2021","Ideation is simultaneously one of the most investigated and most intriguing aspects of design. The reasons for this attention are partly due to its importance in design and innovation, and partly due to an array of conflicting results and explanations. In this study, we develop an integrative perspective on individual ideation by combining cognitive and process-based views via dual-process theory. We present a protocol and network analysis of 31 ideation sessions, based on novice designers working individually, revealing the emergence of eight idea archetypes and a number of process features. Based on this, we propose the Dual-Process Ideation (DPI) Model, which links idea creation and idea judgement. This explains a number of previously contradictory results and offers testable predictive power.","conceptual design; creativity; design cognition; design process(es); dual-process theory","en","journal article","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:ce9479b3-4c5b-4d74-9928-9d71515c1cae","http://resolver.tudelft.nl/uuid:ce9479b3-4c5b-4d74-9928-9d71515c1cae","Automated seismic acquisition geometry design for optimized illumination at the target: a linearized approach","Wu, S. (TU Delft ImPhys/Computational Imaging; TU Delft ImPhys/Medical Imaging); Verschuur, D.J. (TU Delft ImPhys/Computational Imaging); Blacquière, G. (TU Delft Geoscience and Engineering; TU Delft Applied Geophysics and Petrophysics)","","2021","In seismic exploration methods, imperfect spatial sampling at the surface causes a lack of illumination at the target in the subsurface. The hampered image quality at the target area of interest causes uncertainties in reservoir monitoring and production, which can have a substantial economic impact. Especially in the case of a complex overburden, the impact of surface sampling on target illumination can be significant. The target-oriented acquisition analysis based on wavefield propagation and a known velocity model has been used to provide guidance for optimizing the acquisition parameters. Seismic acquisition design is usually a manual optimization process, with consideration of many aspects. In this study, we develop a methodology that automatically optimizes an irregular receiver geometry when the source geometry is fixed or vice versa. The methodology includes objective functions defined by two criteria: optimizing the image resolution and optimizing the angle-dependent illumination information. We use a two-step parameterization in order to make the problem more linear and, thereby, solve the acquisition design problem by using a gradient descent algorithm. With simple and complex velocity models, we demonstrate that the proposed method is effective, while the involved computational cost is acceptable. 
Interestingly, the optimization results in our examples show that the conventional uniform geometry already satisfies the resolution requirement, while optimizing for angle coverage can provide a large uplift and is strongly dependent on the velocity model.","Acoustic beams; Analytical models; Computational modeling; computational seismology; controlled source seismology; Geometry; image processing; inverse theory; Lighting; Mathematical models; Receivers; seismic instruments","en","journal article","","","","","","","","","","Geoscience and Engineering","ImPhys/Computational Imaging","","",""
"uuid:54ca149e-9f5d-43d6-a780-170e84dfe527","http://resolver.tudelft.nl/uuid:54ca149e-9f5d-43d6-a780-170e84dfe527","A spatially resolved model for pressure filtration of edible fat slurries","van den Akker, H.E.A. (TU Delft ChemE/Transport Phenomena; University of Limerick); Hazelhoff Heeres, Doedo P. (Student TU Delft); Kloek, William (FrieslandCampina)","","2021","A spatially resolved one dimensional pressure filtration model was developed for a slurry of edible fat crystals. The model focuses on the expression step in which a cake is compressed to force the liquid through a filter cloth. The model describes the local oil flow in the shrinking cake modeled as a porous nonlinear elastic medium existing of two phases, viz. porous aggregates and interaggregate liquid. Conservation equations lead to a set of two differential equations (vs. time and vs. a material coordinate ω) for two void ratios, which are solved numerically by exploiting a finite-difference scheme. A simulation with this model results in a spatially resolved cake composition and in the outflow velocity, both as a function of time, as well as the final solid fat contents of the cake. Simulation results for various filtration conditions are compared with experimental data collected in a pilot-plant scale filter press.","cake consolidation; fat agglomerates; food processing; numerical simulation; pressure filtration","en","journal article","","","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:8b901891-c268-4c9c-8950-7dd7e0473b88","http://resolver.tudelft.nl/uuid:8b901891-c268-4c9c-8950-7dd7e0473b88","State of Conservation of Concrete Heritage Buildings: A European Screening","Pardo Redondo, G. (TU Delft Heritage & Technology); Franco, Giovanna (University of Genoa); Georgiou, Antroula (University of Cyprus); Ioannou, Ioannis (University of Cyprus); Lubelli, B. (TU Delft Heritage & Technology); Musso, Stefano F. (University of Genoa); Naldini, S. (TU Delft Heritage & Technology); Nunes, Cristiana (Czech Academy of Sciences); Vecchiattini, Rita (University of Genoa)","","2021","Historic concrete buildings are at risk. Limited knowledge of concrete technology until the 1960s led to more sensitive buildings than modern concrete buildings. In addition, the lack of sensibility regarding their heritage value and insufficient protection is leading to remorseless demolition. Still, concrete has proved to be a resilient material that can last over a century with proper care. There is not yet an estimation of the status of historic concrete buildings in Europe. Until now, a few attempts have been done to secondarily, and subjectively, gauge their conservation status. This paper is the result of a joint investigation studying forty-eight historic concrete buildings distributed in four countries. They were surveyed by expert teams according to a predefined methodology. The study aims to identify recurrent damages and parameters affecting the conservation state. It also aims to serve as the first trial for an objective and measurable methodology, to apply it with a statistically significant number of cases. Damages related to the corrosion of reinforcement and moisture-related processes were the most recurrent. The use of plasters, flat roofs, and structural façade walls show a positive effect in protecting the concrete. 
The state of conservation varies greatly across countries.","Assessment; Concrete; Damage processes; Decay patterns; Historic structures","en","journal article","","","","","","","","","","","Heritage & Technology","","",""
"uuid:9a31df1d-0195-4b25-acd0-c5683808f611","http://resolver.tudelft.nl/uuid:9a31df1d-0195-4b25-acd0-c5683808f611","Improved iGAL 2.0 Metric Empowers Pharmaceutical Scientists to Make Meaningful Contributions to United Nations Sustainable Development Goal 12","Roschangar, Frank (Boehringer Ingelheim Pharma GmbH and Co. KG); Li, Jun (Bristol-Myers Squibb); Zhou, Yanyan (California State University East Bay); Aelterman, Wim (Janssen Pharmaceutica Campus); Borovika, Alina (Bristol-Myers Squibb); Colberg, Juan (Pfizer Inc.); Dickson, David P. (Teva Pharmaceuticals); Gallou, Fabrice (Novartis Pharma); Sheldon, R.A. (TU Delft BT/Biocatalysis; University of Witwatersrand)","","2021","The large and steadily growing demand for medicines combined with their inherent resource-intensive manufacturing necessitates a relentless push for their sustainable production. Pharmaceutical companies are constantly seeking to perform reliable life cycle assessments of their medicinal products and assess the true value of their sustainable development achievements; however, they find themselves impeded by the lack of a universal metric system that allows for objective quantification of the underlying core denominators. Guided by the unambivalent purpose of the United Nations Sustainable Development Goal 12, which aims at substantially reducing production waste by 2030, and driven by a vision to catalyze greener active pharmaceutical ingredient (API) manufacturing around the globe, the authors set out to overcome current obstacles by defining an improved model for the metric named innovation green aspiration level, iGAL 2.0. We propose yield and convergence as new key sustainability indicators and include a new formula for convergence with potential applicability in computer assisted synthesis planning (CASP) algorithms. 
The improved statistical model of iGAL 2.0 represents a valuable extension to the common API process waste metrics, process mass intensity (PMI) and complete E factor (cEF), by putting those measures into perspective: iGAL 2.0 enables determination of relative process greenness (RPG) to identify potentially underperforming and environmentally concerning processes early and thereby deliver environmental value. At the same time, iGAL 2.0 generates economic value since reduced waste correlates to lower API production costs. The metric is complemented by its scorecard companion to highlight the impact of innovation on reductions of API manufacturing waste, enabling scientists to readily communicate the value of their work to their peers, managers, and the general public. We believe that iGAL 2.0 can readily be adopted by pharmaceutical firms around the globe and thereby empower and inspire their scientists to make meaningful and significant contributions to global sustainability.","Active pharmaceutical ingredient (API); Complete E factor (cEF); Computer assisted synthesis planning (CASP); Convergence; Green chemistry metrics; Innovation green aspiration level (iGAL); Life cycle assessment (LCA); Relative process greenness (RPG); Scorecard; United Nations Sustainable Development Goal (UN SDG)","en","journal article","","","","","","","","","","","BT/Biocatalysis","","",""
"uuid:c6e140f2-1877-4714-963b-b7cb4baa1743","http://resolver.tudelft.nl/uuid:c6e140f2-1877-4714-963b-b7cb4baa1743","Monolithic integration of a smart temperature sensor on a modular silicon-based organ-on-a-chip device","Martins Da Ponte, R. (TU Delft Bio-Electronics); Gaio, N. (TU Delft Electronic Components, Technology and Materials; BIOND Solutions B.V.); van Zeijl, H.W. (TU Delft Electronic Components, Technology and Materials); Vollebregt, S. (TU Delft Electronic Components, Technology and Materials); Dijkstra, Paul (Philips Innovation Services); Dekker, R. (TU Delft Electronic Components, Technology and Materials; Philips Research); Serdijn, W.A. (TU Delft Bio-Electronics); Giagka, Vasiliki (TU Delft Bio-Electronics; Fraunhofer Institute for Reliability and Microintegration IZM)","","2021","One of the many applications of organ-on-a-chip (OOC) technology is the study of biological processes in human induced pluripotent stem cells (iPSCs) during pharmacological drug screening. It is of paramount importance to construct OOCs equipped with highly compact in situ sensors that can accurately monitor, in real time, the extracellular fluid environment and anticipate any vital physiological changes of the culture. In this paper, we report the co-fabrication of a CMOS smart sensor on the same substrate as our silicon-based OOC for real-time in situ temperature measurement of the cell culture. The proposed CMOS circuit is developed to provide the first monolithically integrated in situ smart temperature-sensing system on a micromachined silicon-based OOC device. Measurement results on wafer reveal a resolution of less than ±0.2 °C and a nonlinearity error of less than 0.05% across a temperature range from 30 to 40 °C. The sensor's time response is more than 10 times faster than the time constant of the convection-cooling mechanism found for a medium containing 0.4 ml of PBS solution. 
All in all, this work is the first step towards realizing OOCs with seamlessly integrated CMOS-based sensors capable of measuring, in real time, multiple physical quantities found in cell culture experiments. It is expected that the use of commercial foundry CMOS processes may enable OOCs with very large-scale multi-sensing integration and actuation in a closed-loop manner.","CMOS monolithic integration; MEMS; MEMS-electronics co-fabrication; Organs-on-a-chip; Smart temperature sensor; Time-mode domain signal processing","en","journal article","","","","","","","","","","","Bio-Electronics","","",""
"uuid:5caeb971-bd4a-4746-897b-c327e5974f9c","http://resolver.tudelft.nl/uuid:5caeb971-bd4a-4746-897b-c327e5974f9c","Fast and robust identification of railway track stiffness from simple field measurement","Shen, C. (TU Delft Railway Engineering); Dollevoet, R.P.B.J. (TU Delft Railway Engineering); Li, Z. (TU Delft Railway Engineering)","","2021","We propose to combine a physics-based finite element (FE) track model and a data-driven Gaussian process regression (GPR) model to directly infer railpad and ballast stiffness from measured frequency response functions (FRF) by field hammer tests. Conventionally, only the rail resonance and full track resonance are used as the FRF features to identify track stiffness. In this paper, eleven features, including sleeper resonances, from a single FRF curve are selected as the predictors of the GPR. To deal with incomplete measurements and uncertainties in the FRF features, we train multiple candidate GPR models with different features, kernels and training sets. Predictions by the candidate models are fused using a weighted Product of Experts method that automatically filters out unreliable predictions. We compare the performance of the proposed method with a model updating method using the particle swam optimization (PSO) on two synthesis datasets in a wide range of scenarios. The results show that the enriched features and the proposed fusion strategy can effectively reduce prediction errors. In the worst-case scenario with only three features and 5% injected noise, the average prediction errors for the railpad and ballast stiffness are approximately 12% and 6%, outperforming the PSO by about 6% and 3%, respectively. Moreover, the method enables fast predictions for large datasets. The predictions for 400 samples takes only approximately 10 s compared with 40 min using the PSO. 
Finally, a field application example shows that the proposed method is capable of extracting the stiffness values using a simple setup, i.e., with only one accelerometer and one impact location.","Field hammer test; Frequency response function; Gaussian process regression; Railway track stiffness; Structural identification","en","journal article","","","","","","","","","","","Railway Engineering","","",""
"uuid:e2f84a5b-0297-4a5e-b8c8-68d9717e76f7","http://resolver.tudelft.nl/uuid:e2f84a5b-0297-4a5e-b8c8-68d9717e76f7","Numerical study of molten metal melt pool behaviour during conduction-mode laser spot melting","Ebrahimi, Amin (TU Delft Team Marcel Hermans); Kleijn, C.R. (TU Delft ChemE/Transport Phenomena); Richardson, I.M. (TU Delft Team Marcel Hermans)","","2021","Molten metal melt pools are characterised by highly non-linear responses, which are very sensitive to imposed boundary conditions. Temporal and spatial variations in the energy flux distribution are often neglected in numerical simulations of melt pool behaviour. Additionally, thermo-physical properties of materials are commonly changed to achieve agreement between the predicted melt-pool shape and experimental post-solidification macrographs. Focusing on laser spot melting in conduction mode, we investigated the influence of dynamically adjusted energy flux distribution and changing thermo-physical material properties on melt pool oscillatory behaviour using both deformable and non-deformable assumptions for the gas-metal interface. Our results demonstrate that adjusting the absorbed energy flux affects the oscillatory fluid flow behaviour in the melt pool and consequently the predicted melt-pool shape and size. We also show that changing the thermo-physical material properties artificially or using a non-deformable surface assumption leads to significant differences in melt pool oscillatory behaviour compared to the cases in which these assumptions are not made.
N, generated by second order differential operators of (possibly) degenerate type. The operators that we consider need not satisfy the Hörmander Condition (HC). Instead, they satisfy the so-called UFG condition, introduced by Herman, Lobry and Sussman in the context of geometric control theory and later by Kusuoka and Stroock. We demonstrate the importance of the class of UFG processes in several respects: i) we show that UFG processes constitute a family of SDEs which exhibit, in general, multiple invariant measures (i.e. they are in general non-ergodic) and for which one is able to describe a systematic procedure to study the basin of attraction of each invariant measure (equilibrium state). ii) We use an explicit change of coordinates to prove that every UFG diffusion can be, at least locally, represented as a system consisting of an SDE coupled with an ODE, where the ODE evolves independently of the SDE part of the dynamics. iii) As a result, UFG diffusions are inherently “less smooth” than hypoelliptic SDEs; more precisely, we prove that UFG processes do not admit a density with respect to Lebesgue measure on the entire space, but only on suitable time-evolving submanifolds, which we describe. iv) We show that our results and techniques, which we devised for UFG processes, can be applied to the study of the long-time behaviour of non-autonomous hypoelliptic SDEs and therefore produce several results on this latter class of processes as well. v) Because processes that satisfy the (uniform) parabolic HC are UFG processes, this paper contains a wealth of results about the long time behaviour of (uniformly) hypoelliptic processes which are non-ergodic.","Diffusion semigroups; Distributions with non-constant rank; Hörmander condition; Long time asymptotics; Non-ergodic SDEs; Parabolic PDE; Processes with multiple invariant measures; Stochastic control theory; UFG condition","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:62c55aaf-4ab4-468a-b35a-27a331037965","http://resolver.tudelft.nl/uuid:62c55aaf-4ab4-468a-b35a-27a331037965","Point-based morphological opening with input data retrieval","Balado Frías, J. (TU Delft GIS Technologie; CINTECX); van Oosterom, P.J.M. (TU Delft GIS Technologie); Díaz-Vilarino, L. (TU Delft GIS Technologie; CINTECX); Lorenzo, H. (CINTECX)","","2021","Mathematical morphology is a technique recently applied directly to point cloud data. Its working principle is based on the removal and addition of points from an auxiliary point cloud that acts as a structuring element. However, in certain applications within a more complex process, these changes to the original data represent an unacceptable loss of information. The aim of this work is to provide a modification of the morphological opening to retain original points and attributes. The proposed amendment affects both steps of the morphological opening: erosion followed by dilation. In morphological erosion, the new eroded points are retained. In morphological dilation, the structuring element does not add its points directly, but uses the point positions to search through the previously eroded points and retrieve them for the dilated point cloud. The modification was tested on synthetic and real data, showing correct performance at the morphological level and preserving the precision of the original points and their attributes. Furthermore, this conservation is shown to be highly relevant in two possible applications: traffic sign segmentation and occluded edge detection.","Detection; Image Processing; LiDAR; Mathematical Morphology; Point Cloud Processing; Segmentation","en","journal article","","","","","","","","","","","GIS Technologie","","",""
"uuid:5d18bdbd-ae5f-47f5-961a-85506008a815","http://resolver.tudelft.nl/uuid:5d18bdbd-ae5f-47f5-961a-85506008a815","Characterising the Role of Pre-Processing Parameters in Audio-based Embedded Machine Learning","Hutiri, Wiebke (TU Delft Information and Communication Technology); Mathur, Akhil (Nokia Bell Labs); Ding, Aaron Yi (TU Delft Information and Communication Technology); Kawsar, F. (Nokia Bell Labs)","","2021","When deploying machine learning (ML) models on embedded and IoT devices, performance encompasses more than an accuracy metric: inference latency, energy consumption, and model fairness are necessary to ensure reliable performance under heterogeneous and resource-constrained operating conditions. To this end, prior research has studied model-centric approaches, such as tuning the hyperparameters of the model during training and later applying model compression techniques to tailor the model to the resource needs of an embedded device. In this paper, we take a data-centric view of embedded ML and study the role that pre-processing parameters in the data pipeline can play in balancing the various performance metrics of an embedded ML system. Through an in-depth case study with audio-based keyword spotting (KWS) models, we show that pre-processing parameter tuning is a remarkable tool that model developers can adopt to trade-off between a model's accuracy, fairness, and system efficiency, as well as to make an embedded ML model resilient to unseen deployment conditions.","audio keyword spotting; embedded machine learning; fairness; pre-processing parameters","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Information and Communication Technology","","",""
"uuid:1015fbeb-5d11-4a71-96cb-7a49571687e1","http://resolver.tudelft.nl/uuid:1015fbeb-5d11-4a71-96cb-7a49571687e1","Morphodynamic Equilibria in Double-Inlet Systems: Existence and Stability","Deng, X. (TU Delft Mathematical Physics); Meerman, C. (Universiteit Leiden); Boelens, T. (Universiteit Gent); De Mulder, T. (Universiteit Gent); Salles, P. (Universidad Nacional Autónoma de México); Schuttelaars, H.M. (TU Delft Mathematical Physics)","","2021","The existence of morphodynamic equilibria of double-inlet systems is investigated using a cross-sectionally averaged morphodynamic model. The number of possible equilibria and their stability strongly depend on the forcing conditions and geometry considered. This is illustrated by considering a rectangular double-inlet system forced by M2 tidal constituents only. Depending on the M2 amplitudes and phases at both entrances, no equilibrium, one equilibrium or multiple morphodynamic equilibria may exist. In case no equilibrium is found, the minimum water depth becomes zero somewhere in the system, reducing the double-inlet system to two single-inlet systems. In the other cases, the location of the minimum water depth and the direction of the tidally-averaged sediment transport, as well as their actual values, depend strongly on the M2 tidal characteristics. Such parameter sensitivity is also observed when including the residual and M4 forcing contributions to the water motion, and when allowing for width variations. This suggests that, when considering a specific system, the number and stability of morphodynamic equilibria, as well as the characteristics of these quantities, can only be assessed by investigating that specific system in detail. As an example, the Marsdiep-Vlie inlet system in the Dutch Wadden Sea is considered. It is found that, by using parameter values and a geometry characteristic for this system, the water motion and bathymetry in morphodynamic equilibrium are qualitatively reproduced. 
Also, the direction and order of magnitude of the tidally-averaged suspended sediment transport compare well with those obtained from a high-complexity numerical model.","bifurcations; double inlet systems; morphodynamic equilibria; process-based models; tidal basins","en","journal article","","","","","","","","","","","Mathematical Physics","","",""
"uuid:bd76ab93-1563-4f36-92c5-37a4aaec7e2e","http://resolver.tudelft.nl/uuid:bd76ab93-1563-4f36-92c5-37a4aaec7e2e","Multi-objective evolutionary based feature selection supported by distributed multi-label classification and deep learning on image/video data","Karagoz, G. (TU Delft Data-Intensive Systems)","","2021","We live in an era in which a myriad of computer systems produce immense amounts of (raw) data every day. This big data must be processed efficiently to gain valuable and hidden knowledge. Complex processing pipelines need to be designed to filter out irrelevant data, and efficient data mining and machine learning methods must be used to discover useful correlations in the big data. The purpose of this PhD research is the implementation of multi-objective evolutionary-based dimensionality reduction on a high volume of image/video data with the support of distributed multi-label classification algorithms.","big data processing; dimensionality reduction; distributed machine learning; feature engineering; feature extraction","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Data-Intensive Systems","","",""
"uuid:156e4db9-23f4-40d5-a927-e42bcd1536e9","http://resolver.tudelft.nl/uuid:156e4db9-23f4-40d5-a927-e42bcd1536e9","A gravity assist mapping for the circular restricted three-body problem using Gaussian processes","Liu, Y. (TU Delft Astrodynamics & Space Missions); Noomen, R. (TU Delft Astrodynamics & Space Missions); Visser, P.N.A.M. (TU Delft Space Engineering)","","2021","Inspired by the Keplerian Map and the Flyby Map, a Gravity Assist Mapping using Gaussian Process Regression for the fully spatial Circular Restricted Three-Body Problem is developed. A mapping function for quantifying the flyby effects over one orbital period is defined. The Gaussian Process Regression model is established by proper mean and covariance functions. The model learns the dynamics of flybys from training samples, which are generated by numerical propagation. To improve the efficiency of this method, a new criterion is proposed to determine the optimal size of the training dataset. We discuss its robustness to demonstrate its suitability for practical use. The influence of different input elements on the flyby effects is studied. The accuracy and efficiency of the proposed model have been investigated for different energy levels, ranging from representative high- to low-energy cases. It shows improvements over the Kick Map, an independent semi-analytical method available in the literature. The accuracy and efficiency of predicting the variation of the semi-major axis are improved by factors of 3.3 and 1.27×10⁴, respectively.","Gaussian process regression; Gravity assist mapping; Machine learning","en","journal article","","","","","","","","","","Space Engineering","Astrodynamics & Space Missions","","",""
"uuid:e92f0e55-98c9-4086-b6c2-7ef25ef52cda","http://resolver.tudelft.nl/uuid:e92f0e55-98c9-4086-b6c2-7ef25ef52cda","Run-and-Tumble Motion: The Role of Reversibility","van Ginkel, G.J. (TU Delft Applied Probability); van Gisbergen, Bart (Student TU Delft); Redig, F.H.J. (TU Delft Applied Probability)","","2021","We study a model of active particles that perform a simple random walk and on top of that have a preferred direction determined by an internal state which is modelled by a stationary Markov process. First we calculate the limiting diffusion coefficient. Then we show that the ‘active part’ of the diffusion coefficient is in some sense maximal for reversible state processes. Further, we obtain a large deviations principle for the active particle in terms of the large deviations rate function of the empirical process corresponding to the state process. Again we show that the rate function and free energy function are (pointwise) optimal for reversible state processes. Finally, we show that in the case with two states, the Fourier–Laplace transform of the distribution, the moment generating function and the free energy function can be computed explicitly. Along the way we provide several examples.","Active particle; Diffusion coefficient; Large deviations; Random walk; Reversibility; Run-and-tumble motion; Stochastic processes","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:48fe8e0c-7a19-42f9-9f69-485c712b3eda","http://resolver.tudelft.nl/uuid:48fe8e0c-7a19-42f9-9f69-485c712b3eda","Degradation modeling considering unit-to-unit heterogeneity-A general model and comparative study","Wang, Zhijie (Shanghai University); Zhai, Qingqing (Shanghai University); Chen, P. (TU Delft Delft Institute of Applied Mathematics; TU Delft Statistics)","","2021","The performance of units in the same batch can exhibit considerable heterogeneity due to the variation in the raw materials and fluctuation in the manufacturing process. For products suffering performance degradation in their use, such heterogeneity often results in an increase in the dispersion of the degradation paths of units in a population. The degradation rate of products can be unit-specific and often treated as random effects. This paper develops a novel random-effects Wiener process model to account for the unit-to-unit heterogeneity in the degradation, where the generalized inverse Gaussian (GIG) distribution is used to model the unit-specific degradation rate. The GIG distribution is a very general distribution with broad applications, which includes the inverse Gaussian (IG) distribution and the Gamma distribution as special cases. We investigate the model properties and develop an expectation maximization (EM) algorithm for parameter estimation. 
By comparing the proposed model with existing models on two real degradation datasets of the infrared LEDs and the GaAs lasers, we show that the proposed model is quite effective for degradation modeling with heterogeneous rates.","EM algorithm; Generalized inverse Gaussian distribution; Heterogeneous degradation; Wiener process model","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-02-01","","","Statistics","","",""
"uuid:2e5f160d-2921-4f71-af4f-f11233872e20","http://resolver.tudelft.nl/uuid:2e5f160d-2921-4f71-af4f-f11233872e20","Prognostics of radiation power degradation lifetime for ultraviolet light-emitting diodes using stochastic data-driven models","Fan, J. (TU Delft Electronic Components, Technology and Materials; Fudan University; Changzhou Institute of Technology Research for Solid State Lighting); Jing, Zhou (Hohai University); Cao, Yixing (Fudan University); Ibrahim, Mesfin Seid (The Hong Kong Polytechnic University); Li, Min (Fudan University); Fan, Xuejun (Lamar University); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials)","","2021","With their advantages of high efficiency, long lifetime, compact size and being free of mercury, ultraviolet light-emitting diodes (UV LEDs) are widely applied in disinfection and purification, photolithography, curing and biomedical devices. However, it is challenging to assess the reliability of UV LEDs based on the traditional life test or even the accelerated life test. In this paper, radiation power degradation modeling is proposed to estimate the lifetime of UV LEDs under both constant stress and step stress degradation tests. Stochastic data-driven predictions with both Gamma process and Wiener process methods are implemented, and the degradation mechanisms occurring under different aging conditions are also analyzed. 
The results show that, compared to least squares regression in the IESNA TM-21 industry standard recommended by the Illuminating Engineering Society of North America (IESNA), the proposed stochastic data-driven methods can predict the lifetime with high accuracy and narrow confidence intervals, which confirms that they provide more reliable information than the IESNA TM-21 standard with greater robustness.","Degradation modeling; Gamma process; IESNA TM-21; Ultraviolet light-emitting diodes (UV LEDs); Wiener process","en","journal article","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:5c26164b-a92a-42b3-8467-025f4f2db0ac","http://resolver.tudelft.nl/uuid:5c26164b-a92a-42b3-8467-025f4f2db0ac","Enhancing the Separation Efficiency in Acetic Acid Manufacturing by Methanol Carbonylation","Dimian, Alexandre C. (Politehnica University of Bucharest); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering; The University of Manchester)","","2021","Acetic acid is an essential chemical used in the production of many chemical products. Using methanol as a bio-building block, the acetic acid produced may be incorporated in bio-based products. This study deals with enhancing the efficiency of separations in a process using a homogeneous catalyst, which is the most widely applied industrially today. Several configurations for downstream processing by distillation are investigated using effective combinations of fully thermally coupled columns, namely, dividing-wall columns, alone or combined with heat pumps. The results indicate that the three-column direct sequence may be effectively replaced by a two-column sequence. Overall, the total annual cost is reduced by 34 %. The cost of the compressor may be paid back in 1.5 years by the energy savings. Low energy intensity is achieved by tight integration with the reaction section.","Acetic acid manufacturing; Chemical process design; Energy efficiency; Process simulation; Sustainable chemical process","en","journal article","","","","","","Accepted Author Manuscript","","2022-07-27","","","ChemE/Product and Process Engineering","","",""
"uuid:c543b6bb-189b-4247-8272-283807ed81bf","http://resolver.tudelft.nl/uuid:c543b6bb-189b-4247-8272-283807ed81bf","Deconsolidation of thermoplastic prepreg tapes during rapid laser heating","Çelik, O. (TU Delft Aerospace Manufacturing Technologies); Choudhary, A. (TU Delft Aerospace Manufacturing Technologies); Peeters, D.M.J. (TU Delft Aerospace Manufacturing Technologies; TU Delft Aerospace Structures & Computational Mechanics); Teuwen, Julie J.E. (TU Delft Aerospace Manufacturing Technologies); Dransfeld, C.A. (TU Delft Aerospace Manufacturing Technologies)","","2021","In this study, the effect of rapid laser heating, which is typical during laser-assisted fiber placement (LAFP), on the micro- and meso-structure of the thermoplastic tape was investigated. Thermoplastic tapes were heated above the melting temperature with different heated lengths (30 and 80 mm) and heating times (0.2 and 0.8 s) in a dedicated experimental setup. In-situ and ex-situ characterization techniques were used to observe the differences between the micro- and meso-structure of the tape before and after heating. The experiments resulted in significant changes in the tape structure, namely increased out-of-plane deformation, waviness, arc-length width, roughness, thickness and volumetric void content. This study shows for the first time that a unique deconsolidation behavior takes place during the heating phase of LAFP: the deconsolidation mechanisms are exacerbated by the non-uniform temperature at the tape surface, which is caused by roughness increase and waviness formation.","A. Polymer-matrix composites (PMCs); B. Microstructures; D. Process monitoring; E. Automated fiber placement (AFP)","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:1a4d7ea8-2c3b-468b-b4fc-10841657dd57","http://resolver.tudelft.nl/uuid:1a4d7ea8-2c3b-468b-b4fc-10841657dd57","Clonos: Consistent Causal Recovery for Highly-Available Streaming Dataflows","Fortunato Silvestre, P.M. (TU Delft Web Information Systems); Fragkoulis, M. (TU Delft Web Information Systems); Spinellis, D. (TU Delft Software Engineering); Katsifodimos, A (TU Delft Web Information Systems)","","2021","Stream processing lies in the backbone of modern businesses, being employed for mission critical applications such as real-time fraud detection, car-trip fare calculations, traffic management, and stock trading. Large-scale applications are executed by scale-out stream processing systems on thousands of long-lived operators, which are subject to failures. Recovering from failures fast and consistently are both top priorities, yet they are only partly satisfied by existing fault tolerance methods due to the strong assumptions these make. In particular, prior solutions fail to address consistency in the presence of nondeterminism, such as calls to external services, asynchronous timers and processing-time windows. This paper describes Clonos, a fault tolerance approach that achieves fast, local operator recovery with exactly-once guarantees and high availability by instantly switching to passive standby operators. Clonos enforces causally consistent recovery, including output deduplication, by tracking nondeterminism within the system through causal logging. To implement Clonos we re-engineered many of the internal subsystems of a state of the art stream processor. We evaluate Clonos' overhead and recovery on the Nexmark benchmark against Apache Flink. 
Clonos achieves instant recovery with negligible overhead and, unlike previous work, does not make assumptions on the deterministic nature of operators.","cloud computing; consistency; exactly-once; fault-tolerance; high-availability; stream processing","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Web Information Systems","","",""
"uuid:0c43c366-a1ff-48fd-bf61-df8dc88979e3","http://resolver.tudelft.nl/uuid:0c43c366-a1ff-48fd-bf61-df8dc88979e3","Investigations on Explainable Artificial Intelligence methods for the deep learning classification of fibre layup defect in the automated composite manufacturing","Meister, S. (TU Delft Structural Integrity & Composites; Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)); Wermes, Mahdieu (Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)); Stueve, J. (TU Delft Aerospace Manufacturing Technologies; Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)); Groves, R.M. (TU Delft Structural Integrity & Composites)","","2021","Automated fibre layup techniques are widely used in the aviation sector for the efficient production of composite components. However, the required manual inspection can take up to 50 % of the manufacturing time. The automated classification of fibre layup defects with Neural Networks potentially increases the inspection efficiency. However, the machine decision-making processes of such classifiers are difficult to verify. Hence, we present an approach for analysing the classification procedure of fibre layup defects. To this end, we comprehensively evaluate 20 Explainable Artificial Intelligence methods from the literature. Accordingly, the techniques Smoothed Integrated Gradients, Guided Gradient Class Activation Mapping and DeepSHAP are applied to a Convolutional Neural Network classifier. These methods analyse the neural activations and robustness of a classifier for unknown and manipulated input data. Our investigations show that especially Smoothed Integrated Gradients and DeepSHAP are well suited for the visualisation of such classifications. Additionally, maximum-sensitivity and infidelity calculations confirm this behaviour.
In the future, customers and developers could apply the presented methods for the certification of their inspection systems.","Automation; Defects; Non-destructive testing; Process monitoring","en","journal article","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:392e6400-871c-452c-9bb6-790140906414","http://resolver.tudelft.nl/uuid:392e6400-871c-452c-9bb6-790140906414","Subgeometric hypocoercivity for piecewise-deterministic markov process monte carlo methods","Andrieu, Christophe (University of Bristol); Dobson, P. (TU Delft Statistics); Wang, Andi Q. (University of Bristol)","","2021","We extend the hypocoercivity framework for piecewise-deterministic Markov process (PDMP) Monte Carlo established in [2] to heavy-tailed target distributions, which exhibit subgeometric rates of convergence to equilibrium. We make use of weak Poincaré inequalities, as developed in the work of [15], the ideas of which we adapt to the PDMPs of interest. On the way we report largely potential-independent approaches to bounding explicitly solutions of the Poisson equation of the Langevin diffusion and its first and second derivatives, required here to control various terms arising in the application of the hypocoercivity result.","Hypocoercivity; Markov chain Monte Carlo; Piecewise-deterministic Markov process; Subgeometric convergence","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:fcbe01a5-b299-4c13-8891-e4e8bc9c14a6","http://resolver.tudelft.nl/uuid:fcbe01a5-b299-4c13-8891-e4e8bc9c14a6","In-operando dynamic visualization of flow through porous preforms based on X-ray phase contrast imaging","Teixidó, Helena (Swiss Federal Institute of Technology); Caglar, Baris (TU Delft Aerospace Manufacturing Technologies; Swiss Federal Institute of Technology); Revol, Vincent (CSEM SA); Michaud, Véronique (Swiss Federal Institute of Technology)","","2021","Direct visualization is often sought to elucidate flow patterns and validate models to predict the filling kinetics during processes whereby a liquid resin infiltrates a textile porous preform. Here, X-ray phase contrast interferometry is evaluated to image in-operando constant flow rate impregnation experiments of a model fluid into glass, carbon and flax fabrics and a 3D printed structure. A methodology is presented to build the dynamic saturation curve based on image analysis of the dark field images, in which the pixel intensity values are transformed into saturation level versus position and time. The results prove the suitability of this technique to observe the progressive saturation averaged through the thickness of translucent and non-translucent preforms. The porosity network formed by the layers of fabric, the refractive properties of the material, the fabric geometry and its position relative to the X-ray setup are reported to be the main parameters affecting image contrast.","A. Fabric/Textiles; D. Process Monitoring; E. Liquid Composite Molding; E. Resin Flow","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:c614969f-39f0-41b5-9266-90432f761924","http://resolver.tudelft.nl/uuid:c614969f-39f0-41b5-9266-90432f761924","Stochastic modeling of hydroclimatic processes using vine copulas","Pouliasis, George (Student TU Delft); Torres Alves, G.A. (TU Delft Hydraulic Structures and Flood Risk); Morales Napoles, O. (TU Delft Hydraulic Structures and Flood Risk)","","2021","The generation of synthetic time series is important in contemporary water sciences for their wide applicability and ability to model environmental uncertainty. Hydroclimatic variables often exhibit highly skewed distributions, intermittency (that is, alternating dry and wet intervals), and spatial and temporal dependencies that pose a particular challenge to their study. Vine copula models offer an appealing approach to generate synthetic time series because of their ability to preserve any marginal distribution while modeling a variety of probabilistic dependence structures. In this work, we focus on the stochastic modeling of hydroclimatic processes using vine copula models. We provide an approach to model intermittency by coupling Markov chains with vine copula models. Our approach preserves first-order auto- and cross-dependencies (correlation). Moreover, we present a novel framework that is able to model multiple processes simultaneously. This method is based on the coupling of temporal and spatial dependence models through repetitive sampling. The result is a parsimonious and flexible method that can adequately account for temporal and spatial dependencies. Our method is illustrated within the context of a recent reliability assessment of a historical hydraulic structure in central Mexico.
Our results show that by ignoring important characteristics of probabilistic dependence that are well captured by our approach, the reliability of the structure could be severely underestimated.","Copula; Hydroclimatic processes; Intermittent behavior; Multivariate simulation; Stochastic simulation; Time series; Vine copula","en","journal article","","","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:4ed7ec56-1d46-4144-ae47-3ebcceebcaf4","http://resolver.tudelft.nl/uuid:4ed7ec56-1d46-4144-ae47-3ebcceebcaf4","The influence of inter-laminar thermal contact resistance on the cooling of material during laser assisted fiber placement","Çelik, O. (TU Delft Aerospace Manufacturing Technologies); Hosseini, S.M.A. (University of Twente); Baran, Ismet (University of Twente); Grouve, Wouter J.B. (University of Twente); Akkerman, Remko (University of Twente); Peeters, D.M.J. (TU Delft Aerospace Manufacturing Technologies; TU Delft Aerospace Structures & Computational Mechanics); Teuwen, Julie J.E. (TU Delft Aerospace Manufacturing Technologies); Dransfeld, C.A. (TU Delft Aerospace Manufacturing Technologies)","","2021","The effect of thermal contact resistance (TCR) correlated to the degree of intimate contact (DIC) between the incoming tape and the substrate on the temperature history during laser-assisted fiber placement (LAFP) was investigated. A novel experimental methodology was designed to understand the effect with a non-contact method which did not influence the local consolidation quality. To assess the influence of TCR numerically, a three-dimensional optical-thermal model was developed. Experimental results indicated that, for the same tape temperature near the nip point, an increase in the compaction force resulted in a decrease in the temperature at the roller exit and the following cooling phase, in correlation with an increase in the final DIC. Also, the effect of the laser power on the final DIC was less pronounced than the compaction force. In the thermal model, when TCR at the tape-substrate interface was not considered, the temperature predictions underestimated the experimental measurements.","A. Polymer-matrix composites (PMCs); C. Process modeling; D. Microstructural analysis; E. Automated fiber placement (AFP)","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:1066fd2b-0c36-41c5-922a-59ab6bff78da","http://resolver.tudelft.nl/uuid:1066fd2b-0c36-41c5-922a-59ab6bff78da","Limit theorems for cloning algorithms","Angeli, Letizia (University of Warwick; Heriot-Watt University); Grosskinsky, S.W. (TU Delft Applied Probability; University of Warwick); Johansen, Adam M. (University of Warwick)","","2021","Large deviations for additive path functionals of stochastic processes have attracted significant research interest, in particular in the context of stochastic particle systems and statistical physics. Efficient numerical ‘cloning’ algorithms have been developed to estimate the scaled cumulant generating function, based on importance sampling via cloning of rare event trajectories. So far, attempts to study the convergence properties of these algorithms in continuous time have led to only partial results for particular cases. Adapting previous results from the literature of particle filters and sequential Monte Carlo methods, we establish a first comprehensive and fully rigorous approach to bound systematic and random errors of cloning algorithms in continuous time. To this end we develop a method to compare different algorithms for particular classes of observables, based on the martingale characterization of stochastic processes. Our results apply to a large class of jump processes on compact state space, and do not involve any time discretization in contrast to previous approaches. This provides a robust and rigorous framework that can also be used to evaluate and improve the efficiency of algorithms.","Cloning algorithm; Dynamic large deviations; Feynman–Kac formulae; Interacting particle systems; Jump processes; L convergence","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:33a8dceb-9202-4d1a-976f-cc47ff364fbe","http://resolver.tudelft.nl/uuid:33a8dceb-9202-4d1a-976f-cc47ff364fbe","Quantitative evaluation of polarimetric estimates from scanning weather radars using a vertically pointing micro rain radar","Reinoso Rondinel, R. (TU Delft Atmospheric Remote Sensing); Schleiss, M.A. (TU Delft Atmospheric Remote Sensing)","","2021","Conventionally, Micro Rain Radars (MRRs) have been used as a tool to calibrate reflectivity from weather radars, estimate the relation between rainfall rate and reflectivity, and study microphysical processes in precipitation. However, limited attention has been given to the reliability of the retrieved drop size distributions (DSDs) from MRRs. This study sheds more light on this aspect by examining the sensitivity of retrieved DSDs to the assumptions made to map Doppler spectra into size distributions, and investigates the capability of an MRR to assess polarimetric observations from operational weather radars. For that, an MRR was installed near the Cabauw observatory in the Netherlands, between the International Research Center for Telecommunications and Radar (IRCTR) Drizzle Radar (IDRA) X-band radar and the Herwijnen operational C-band radar. The measurements of the MRR from November 2018 to February 2019 were used to retrieve DSDs and simulate horizontal reflectivity Ze, differential reflectivity ZDR, and specific differential phase KDP in rain. Attention is given to the impact of aliased spectra and right-hand-side truncation on the simulation of polarimetric variables. From a quantitative assessment, the correlations of Ze and ZDR between the MRR and Herwijnen radar were 0.93 and 0.70, respectively, while those between the MRR and IDRA were 0.91 and 0.69. However, Ze and ZDR from the Herwijnen radar showed slight biases of 1.07 and 0.25 dB. For IDRA, the corresponding biases were 2.67 and-0.93 dB. 
Our results show that MRR measurements are advantageous to inspect the calibration of scanning radars and validate polarimetric estimates in rain, provided that the DSDs are correctly retrieved and controlled for quality assurance.","Data processing; Data quality control; Measurements; Radars/Radar observations; Weather radar signal processing","en","journal article","","","","","","","","2021-09-01","","","Atmospheric Remote Sensing","","",""
"uuid:e873ee85-4cc0-49fa-9447-efb394cdc14d","http://resolver.tudelft.nl/uuid:e873ee85-4cc0-49fa-9447-efb394cdc14d","White paper on high-throughput process development for integrated continuous biomanufacturing","Neves Sao Pedro, M. (TU Delft BT/Bioprocess Engineering); Picanço Castanheira Da Silva, T. (TU Delft BT/Bioprocess Engineering); Patil, Rohan (Global CMC Development, Sanofi); Ottens, M. (TU Delft BT/Bioprocess Engineering)","","2021","Continuous manufacturing is an indicator of a maturing industry, as can be seen by the example of the petrochemical industry. Patent expiry promotes a price competition between manufacturing companies, and more efficient and cheaper processes are needed to achieve lower production costs. Over the last decade, continuous biomanufacturing has had significant breakthroughs, with regulatory agencies encouraging the industry to implement this processing mode. Process development is resource and time consuming and, although it is increasingly becoming less expensive and faster through high-throughput process development (HTPD) implementation, reliable HTPD technology for integrated and continuous biomanufacturing is still lacking and is considered to be an emerging field. Therefore, this paper aims to illustrate the major gaps in HTPD and to discuss the major needs and possible solutions to achieve an end-to-end Integrated Continuous Biomanufacturing, as discussed in the context of the 2019 Integrated Continuous Biomanufacturing conference. The current HTPD state-of-the-art for several unit operations is discussed, as well as the emerging technologies which will expedite a shift to continuous biomanufacturing.","high-throughput process development; integrated continuous biomanufacturing; microfluidics; modeling; process analytical technology","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:a688376d-2c9b-4ed6-bc39-9bc3f155d01d","http://resolver.tudelft.nl/uuid:a688376d-2c9b-4ed6-bc39-9bc3f155d01d","Light scattering by gold nanoparticles cured in optical adhesive at optical fibre interfaces","Wang, X. (TU Delft Structural Integrity & Composites); Benedictus, R. (TU Delft Structural Integrity & Composites); Groves, R.M. (TU Delft Structural Integrity & Composites)","Lehmann, Peter (editor); Osten, Wolfgang (editor); Goncalves, Armando Albertazzi (editor)","2021","This study forms a part of the research in using nanoparticles (NPs) to increase the intensity of light scattering signal in the optical fibres. Increasing the intensity of the backscattered light signal in the optical fibres shows the potential to increase the signal-to-noise ratio in order to improve the sensitivity of the backscatter reflectometry. Doping NPs into the optical fibres can greatly increase the scattered light. However, it is not easy to manufacture NP-doped optical fibres to test different designs. To overcome this problem, in our former work we used the method of dropping refractive index matching liquid containing gold NPs at the optical fibres end tips to investigate the intensity change of the scattered light from the interfaces. In this paper, some new initial experimental results for the scattered light between the optical fibre end tips are shown. Gold NPs have been mixed into the optical adhesive (Norland) and is then dropped and cured at the optical fibre end tips. A backscatter reflectometer (LUNA ODiSI-B) was used in the experiment to measure the intensity of scattered light distribution between the optical fibre end tips. 
We investigated 4 cases of light scattering between the optical fibre end tips: (i) the backscattered light intensity distribution in the case of an air gap between the optical fibre end tips; (ii) the backscattered light intensity distribution with optical adhesive between the optical fibre end tips; (iii) the backscattered light intensity distribution with optical adhesive containing gold NPs (gold nanopowder (<100 nm), Sigma Aldrich) between the optical fibre end tips before the curing process and (iv) the backscattered light intensity distribution with optical adhesive containing gold NPs between the optical fibre end tips after the curing process. Our initial findings are that the light scattered by gold NPs at the optical fibre interfaces can be detected by the backscatter reflectometer. By obtaining the differential signal between the distributed light scattering by cured optical adhesive containing gold NPs and only optical adhesive between the optical fibre end tips, the light scattered by the gold NPs has been determined.","Curing process; Gold nanoparticle; Light scattering; Optical adhesive","en","conference paper","SPIE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-04-12","","","Structural Integrity & Composites","","",""
"uuid:a8a3c22a-57c9-4d0d-838a-1d89e1fe272c","http://resolver.tudelft.nl/uuid:a8a3c22a-57c9-4d0d-838a-1d89e1fe272c","Review of Manufacturing Process Defects and Their Effects on Memristive Devices","Poehls, L. M.Bolzani (Rheinisch-Westfälische Technische Hochschule; Pontifical Catholic University of Rio Grande do Sul); Fieback, M. (TU Delft Computer Engineering); Hoffmann-Eifert, S. (Forschungszentrum Jülich GmbH); Copetti, T. (Rheinisch-Westfälische Technische Hochschule); Brum, E. (Pontifical Catholic University of Rio Grande do Sul); Menzel, S. (Forschungszentrum Jülich GmbH); Hamdioui, S. (TU Delft Quantum & Computer Engineering); Gemmeke, T. (Rheinisch-Westfälische Technische Hochschule)","","2021","Complementary Metal Oxide Semiconductor (CMOS) technology has been scaled down over the last forty years making possible the design of high-performance applications, following the predictions made by Gordon Moore and Robert H. Dennard in the 1970s. However, there is a growing concern that device scaling, while maintaining cost-effective production, will become infeasible below a certain feature size. In parallel, emerging applications including Internet-of-Things (IoT) and big data applications present high demands in terms of storage and computing capability, combined with challenging constraints in terms of size, power consumption and response latency. In this scenario, memristive devices have become promising candidates to complement the CMOS technology due to their CMOS manufacturing process compatibility, great scalability and high density, zero standby power consumption and their capacity to implement high density memories as well as new computing paradigms. Despite these advantages, memristive devices are also susceptible to manufacturing defects that may cause unique faulty behaviors that are not seen in CMOS, increasing significantly the complexity of test procedures. 
This paper provides a review of the manufacturing process of memristive devices, focusing on Valence Change Mechanism (VCM)-based memristive devices, and a comparative analysis of the CMOS and memristive device manufacturing processes. Moreover, this paper identifies possible manufacturing failure mechanisms that may affect these novel devices, completing the list of the already known mechanisms, and provides a discussion about possible faulty behaviors. Note that the identification of these mechanisms provides insights regarding the possible memristive devices’ defective behaviors, enabling the derivation of more accurate fault models and, consequently, more suitable test procedures.","CMOS; Defects; Fault models; Manufacturing process; Memristive devices","en","journal article","","","","","","","","","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:a27a001a-838f-49fa-824d-100bc0549e26","http://resolver.tudelft.nl/uuid:a27a001a-838f-49fa-824d-100bc0549e26","Mesoscale process modeling of a thick pultruded composite with variability in fiber volume fraction","Yuksel, O. (TU Delft Aerospace Manufacturing Technologies; University of Twente); Sandberg, Michael (Technical University of Denmark; Aarhus University); Hattel, Jesper H. (Technical University of Denmark); Akkerman, Remko (University of Twente); Baran, Ismet (University of Twente)","","2021","Pultruded fiber-reinforced polymer composites are susceptible to microstructural nonuni-formity such as variability in fiber volume fraction (Vf ), which can have a profound effect on process-induced residual stress. Until now, this effect of non-uniform Vf distribution has been hardly addressed in the process models. In the present study, we characterized the Vf distribution and accompanying nonuniformity in a unidirectional fiber-reinforced pultruded profile using optical light microscopy. The identified nonuniformity in Vf was subsequently implemented in a mesoscale thermal–chemical–mechanical process model, developed explicitly for the pultrusion process. In our process model, the constitutive material behavior was defined locally with respect to the corresponding fiber volume fraction value in different-sized representative volume elements. The effect of nonuniformity on the temperature and cure degree evolution, and residual stress was analyzed in depth. The results show that the nonuniformity in fiber volume fraction across the cross-section increased the absolute magnitude of the predicted residual stress, leading to a more scattered residual stress distribution. The observed Vf gradient promotes tensile residual stress at the core and compressive residual stress at the outer regions. 
Consequently, it is concluded that it is essential to take the effects of nonuniformity in fiber distribution into account for residual stress estimations, and the proposed numerical framework was found to be an efficient tool to study this aspect.","Fiber volume fraction; Mesoscale; Nonuniformity; Process modeling; Pultrusion; Residual stress","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:c07047a5-22e4-4aeb-9212-55468cef7b4a","http://resolver.tudelft.nl/uuid:c07047a5-22e4-4aeb-9212-55468cef7b4a","WHOSe Heritage: Classification of UNESCO World Heritage Statements of ""outstanding Universal Value"" with Soft Labels","Bai, N. (TU Delft Heritage & Values); Luo, Renqian (University of Science and Technology of China); Nourian, Pirouz (TU Delft Design Informatics); Pereira Roders, A. (TU Delft Heritage & Values)","Moens, Marie-Francine (editor); Huang, Xuanjing (editor); Specia, Lucia (editor); Yih, Scott Wen-Tau (editor)","2021","The UNESCO World Heritage List (WHL) includes the exceptionally valuable cultural and natural heritage to be preserved for mankind. Evaluating and justifying the Outstanding Universal Value (OUV) is essential for each site inscribed in the WHL, and yet a complex task, even for experts, since the selection criteria of OUV are not mutually exclusive. Furthermore, manual annotation of heritage values and attributes from multi-source textual data, which is currently dominant in heritage studies, is knowledge-demanding and timeconsuming, impeding systematic analysis of such authoritative documents in terms of their implications on heritage management. This study applies state-of-the-art NLP models to build a classifier on a new dataset containing Statements of OUV, seeking an explainable and scalable automation tool to facilitate the nomination, evaluation, research, and monitoring processes of World Heritage sites. Label smoothing is innovatively adapted to improve the model performance by adding prior interclass relationship knowledge to generate soft labels. The study shows that the best models fine-tuned from BERT and ULMFiT can reach 94.3% top-3 accuracy. A human study with expert evaluation on the model prediction shows that the models are sufficiently generalizable. 
The study shows promise for further development and application in heritage research and practice.","Heritage values; Natural Language Processing; Outstanding Universal Value; Classification; Deep Learning; UNESCO World Heritage","en","conference paper","Association for Computational Linguistics (ACL)","","","","","","","","","","Heritage & Values","","",""
"uuid:1dd37a8e-0e82-4dde-a1a5-17f7a0c7077b","http://resolver.tudelft.nl/uuid:1dd37a8e-0e82-4dde-a1a5-17f7a0c7077b","Segmentation of traffic signs from poles with mathematical morphology applied to point clouds","Balado Frías, J. (TU Delft GIS Technologie; Universidade de Vigo, Vigo); Soilán, M. (University of Salamanca); Díaz-Vilarino, L. (TU Delft GIS Technologie; Universidade de Vigo, Vigo); van Oosterom, P.J.M. (TU Delft GIS Technologie)","","2021","Traffic signs are one of the most relevant road assets for driving, as the safety of drivers depends to a great extent on their correct location. In this paper two methods are compared for the segmentation of the sign and the pole supporting it. Both methods are based on the morphological opening to identify the sign points, the first one directly employs the mathematical morphology directly applied to point clouds and the second one through point cloud rasterization into images. The comparison was conducted on twenty real traffic signs acquired with Mobile Laser Scanning obtaining point clouds from environments with signposts, traffic lights and lampposts. The results showed a correct segmentation of the signs, obtaining a F-score of 0.81 by the point-based method and a 0.75 by 2D image method. In particular, the point-based mathematical morphology proved to be more accurate in the segmentation of traffic sings installed on traffic lights and lampposts, avoiding over detection shown by the 2D image method.","Image processing; Mathematical morphology; Mobile Laser Scanning; Morphological opening; Topographic LiDAR; Traffic signs","en","journal article","","","","","","","","","","","GIS Technologie","","",""
"uuid:505d445b-6f4b-4110-9f96-0f9c96f7109a","http://resolver.tudelft.nl/uuid:505d445b-6f4b-4110-9f96-0f9c96f7109a","Micro-scale Realization of Compliant Mechanisms: Manufacturing Processes and Constituent Materials—A Review","Wang, M. (TU Delft Mechatronic Systems Design; Jiangsu University); Ge, Daohan (Jiangsu University); Zhang, Liqiang (Jiangsu University); Herder, J.L. (TU Delft Precision and Microsystems Engineering; TU Delft Mechatronic Systems Design)","","2021","Compliant micromechanisms (CMMs) acquire mobility from the deflection of elastic members and have been proven to be robust by millions of silicon MEMS devices. However, the limited deflection of silicon impedes the realization of more sophisticated CMMs, which often require larger deflections. Recently, some novel manufacturing processes have emerged but are not well known by the community. In this paper, the realization of CMMs is reviewed, aiming to provide help to mechanical designers to quickly find the proper realization method for their CMM designs. To this end, the literature surveyed was classified and statistically analyzed, and representative processes were summarized individually to reflect the state of the art of CMM manufacturing. Furthermore, the features of each process were collected into tables to facilitate the reference of readers, and the guidelines for process selection were discussed. The review results indicate that, even though the silicon process remains dominant, great progress has been made in the development of polymer-related and composite-related processes, such as micromolding, SU-8 process, laser ablation, 3D printing, and the CNT frameworking. These processes result in constituent materials with a lower Young’s modulus and larger maximum allowable strain than silicon, and therefore allow larger deflection. 
The geometrical capabilities (e.g., aspect ratio) of the realization methods should also be considered, because different types of CMMs have different requirements. We conclude that the SU-8 process, 3D printing, and carbon nanotube frameworking will play more important roles in the future owing to their excellent comprehensive capabilities.","Compliant micromechanism; Constituent material; Manufacturing process","en","review","","","","","","","","","","Precision and Microsystems Engineering","Mechatronic Systems Design","","",""
"uuid:c4715cd7-79cf-448d-9035-13f947789bc6","http://resolver.tudelft.nl/uuid:c4715cd7-79cf-448d-9035-13f947789bc6","Editorial: Perspectives of Chemicals Synthesis as a Green Alternative to Fossil Fuels","Puigjaner, Luis (Universitat Politecnica de Catalunya); Pérez-Fortes, Mar (TU Delft Energie and Industrie); Somoza-Tornos, Ana (Universitat Politecnica de Catalunya; University of Colorado); Espuña, Antonio (Universitat Politecnica de Catalunya)","","2021","","circular economy; economic competitiveness; green fuels; LCA; low carbon processes; process systems engineering; sustainable development","en","contribution to periodical","","","","","","","","","","","Energie and Industrie","","",""
"uuid:405c4965-7150-4b09-a8b8-50119b6209c2","http://resolver.tudelft.nl/uuid:405c4965-7150-4b09-a8b8-50119b6209c2","Enhanced process for energy efficient extraction of 1,3-butadiene from a crude C4 cut","Mantingh, J.S. (TU Delft ChemE/Delft Ingenious Design); Kiss, A.A. (TU Delft ChemE/Product and Process Engineering; The University of Manchester)","","2021","1,3-butadiene is an essential platform chemical for producing rubberlike polymers, which is extracted from C4 hydrocarbons that are produced through steam cracking. The current state-of-the-art technology is the BASF process that uses thermally coupled extractive distillation (ED) followed by two distillation columns. However, the process requires high temperature heat input, thus high cost hot utility and reduces the possibility for process heat integration. To solve these issues, this study proposes novel enhancements: the ED part is modified with intermediate heating and the classic columns are replaced with a heat pump assisted dividing wall column (DWC). Rigorous simulations were carried out in Aspen Plus for a typical ED process. The intermediate reboiler system is designed to maximize the possible process heat recovery. The results show that the heat pump assisted DWC is able to reduce the energy intensity of the classic distillation section of the BASF process by 54.8% and reduces the total annual costs by 29.9%. Additionally, the intermediate reboiler reduces the energy intensity of the ED section by 8.3% while also reducing the CAPEX of the system due to the need for a smaller recycle compressor. In combination, these modifications are able to achieve up to a 21% reduction in the energy intensity of the overall process, with a payback time of 1 year.","Energy savings; Fluid separation; Process design; Process intensification; Vapor recompression","en","journal article","","","","","","","","","","","ChemE/Delft Ingenious Design","","",""
"uuid:24e76949-5c40-4d7e-9175-af610a4c1afc","http://resolver.tudelft.nl/uuid:24e76949-5c40-4d7e-9175-af610a4c1afc","Distinguishing level-1 phylogenetic networks on the basis of data generated by Markov processes","Gross, Elizabeth (University of Hawaii at Manoa); van Iersel, L.J.J. (TU Delft Discrete Mathematics and Optimization); Janssen, R. (TU Delft Discrete Mathematics and Optimization); Jones, M.E.L. (TU Delft Discrete Mathematics and Optimization); Long, Colby (The College of Wooster); Murakami, Yukihiro (TU Delft Discrete Mathematics and Optimization)","","2021","Phylogenetic networks can represent evolutionary events that cannot be described by phylogenetic trees. These networks are able to incorporate reticulate evolutionary events such as hybridization, introgression, and lateral gene transfer. Recently, network-based Markov models of DNA sequence evolution have been introduced along with model-based methods for reconstructing phylogenetic networks. For these methods to be consistent, the network parameter needs to be identifiable from data generated under the model. Here, we show that the semi-directed network parameter of a triangle-free, level-1 network model with any fixed number of reticulation vertices is generically identifiable under the Jukes–Cantor, Kimura 2-parameter, or Kimura 3-parameter constraints.","Identifiability; Markov processes; Phylogenetic networks; Reticulation","en","journal article","","","","","","","","","","","Discrete Mathematics and Optimization","","",""
"uuid:4ef6ea56-8e59-4bfe-a351-4d8327176279","http://resolver.tudelft.nl/uuid:4ef6ea56-8e59-4bfe-a351-4d8327176279","Information sharing to mitigate delays in port: the case of the Port of Rotterdam","Nikghadam, S. (TU Delft Transport and Logistics); Molkenboer, Kim F. (Student TU Delft); Tavasszy, Lorant (TU Delft Transport and Planning; TU Delft Transport and Logistics); Rezaei, J. (TU Delft Transport and Logistics)","","2021","Reliability of service times has long been a concern of many ports around the world. This paper presents an approach to mitigate delays in service times through improved information sharing in ports. The approach is based on a mapping of information sharing links and their association to the root causes of frequently occurring delays. We identify the kind of information which is critical in mitigating delays. Critical information links are then re-ordered to create an information sharing arrangement between the actors, which further condenses and simplifies the required information sharing actions. We apply the proposed approach to the Port of Rotterdam. Quantitative data of 28,000 port calls is complemented by qualitative data collected through direct observations and expert interviews with port actors, including the pilot organization, a tugboat company, the boatmen organization, the harbour master, a terminal and a vessel agent. Besides the suggested arrangement for information sharing, the case reveals the critical position of pilots, a vulnerable position of tugboat companies and the minimal contribution made by the terminal towards information sharing. The increased pressure on ports by ever larger vessels seems to bear its fair share for delays and bottlenecks in the smooth execution of port operations.","Delay mitigation; Information sharing; Nautical chain; Port call process","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:7a14215f-28ab-4c07-8ea2-6cdf15d5aebc","http://resolver.tudelft.nl/uuid:7a14215f-28ab-4c07-8ea2-6cdf15d5aebc","Recognizing Perceived Interdependence in Face-to-Face Negotiations through Multimodal Analysis of Nonverbal Behavior","Dudzik, B.J.W. (TU Delft Pattern Recognition and Bioinformatics); Columbus, Simon (University of Copenhagen); Matej Hrkalovic, T. (Vrije Universiteit Amsterdam); Balliet, Daniel (Vrije Universiteit Amsterdam); Hung, H.S. (TU Delft Pattern Recognition and Bioinformatics)","","2021","Enabling computer-based applications to display intelligent behavior in complex social settings requires them to relate to important aspects of how humans experience and understand such situations. One crucial driver of peoples' social behavior during an interaction is the interdependence they perceive, i.e., how the outcome of an interaction is determined by their own and others' actions. According to psychological studies, both the nonverbal behavior displayed by Motivated by this, we present a series of experiments to automatically recognize interdependence perceptions in dyadic face-to-face negotiations using these sources. Concretely, our approach draws on a combination of features describing individuals' Facial, Upper Body, and Vocal Behavior with state-of-the-art algorithms for multivariate time series classification. Our findings demonstrate that differences in some types of interdependence perceptions can be detected through the automatic analysis of nonverbal behaviors. We discuss implications for developing socially intelligent systems and opportunities for future research.","Situation Perception; Social Signal Processing; User-Modeling","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:b77c9ac1-8cda-4bb2-b707-29d1359b6ad3","http://resolver.tudelft.nl/uuid:b77c9ac1-8cda-4bb2-b707-29d1359b6ad3","Process simulation development of a clean waste-to-energy conversion power plant: Thermodynamic and environmental assessment","Kuo, P.C. (TU Delft Energy Technology); Illathukandy, Biju (Indian Institute of Technology Delhi; Government Engineering College); Kung, Chi Hsiu (Tayih Corporation); Chang, Jo Shu (National Cheng Kung University; Tunghai University); Wu, Wei (National Cheng Kung University)","","2021","Waste-to-energy (WTE) conversion technologies for generating renewable energy and solving the environmental problems have an important role in the development of sustainable circular economy. This paper presents a novel high-efficiency WTE power plant using refuse-derived fuel (RDF) as feedstock by integrating torrefaction (T) pretreatment with plasma gasifier (PG), solid oxide fuel cell (SOFC), and combined heat and power (CHP) system. The combined impacts of torrefaction conditions (i.e. temperature and residence time) and steam-to-fuel (S/F) ratio on the energy and environmental performances of the proposed T-PG-SOFC-CHP power plant without CO2 capture (System I) is first evaluated. Results show that torrefaction of RDF prior to plasma gasification provides better syngas quality and therefore the system electrical efficiency (SEE) and CHP efficiency (CHPE) of System I can be markedly boosted compared to that of untreated RDF. However, the integration of torrefaction unit shows a negative effect on the energy return on investment (EROI) due to high energy demands for torrefaction and plasma gasification. Overall, the values of CHPE of System I range from 47.25% to 55.39% when the torrefaction temperatures of 200 and 250 °C are adopted. In contrast, the torrefaction of RDF at 300 °C is not a recommended condition for operation in the T-PG-SOFC-CHP power plant because of noticeably negative energy and environmental impacts. 
Moreover, to prevent the risk of carbon deposition on the SOFC anode, a recirculation ratio (RR) of the anode off-gas of 30% is required. Finally, the introduction of oxy-fuel combustion technology into the T-PG-SOFC-CHP system for CO2 capture (System II) makes it possible to achieve a WTE power plant with zero direct CO2 emissions. However, this results in an energy penalty of about 5.40–6.77% associated with the CO2 capture and compression process.","CO2 capture; Energy and environmental analysis; EROI; Gasification; Process integration; Waste to energy conversion","en","journal article","","","","","","","","","","","Energy Technology","","",""
"uuid:d18a88e4-52b9-4eac-bb56-cc02db4e9a78","http://resolver.tudelft.nl/uuid:d18a88e4-52b9-4eac-bb56-cc02db4e9a78","Predicting major hazard accidents in the process industry based on organizational factors: A practical, qualitative approach","Schmitz, P.J.H. (TU Delft Safety and Security Science; OCI-Nitrogen); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); Swuste, P.H.J.J. (TU Delft Safety and Security Science); van Nunen, K.L.L. (TU Delft Safety and Security Science; Universiteit Antwerpen)","","2021","OCI Nitrogen seeks to gain knowledge of (leading) indicators regarding the process safety performance of their ammonia production process. The current sub-study raises the question whether major hazard accidents in the ammonia production process can be predicted from organizational factors, also called management delivery systems. This paper links organizational factors to accident processes and their barrier systems, using the bowtie metaphor. It is shown that organizational factors indirectly impact accident processes as they strongly influence the quality or trustworthiness of the barrier systems. By putting the right focus on organizational factors during audits or reviews, major accident processes get the attention they deserve, and the necessary actions are taken at the right management level. Qualitative and quantitative monitoring of organizational factors can provide a picture of their operation and efficiency. Using an example on retrospective data it is demonstrated that information from organizational factors could have stopped the development of the near-accident prematurely. However, organizational factors should first be qualitatively assessed before they are quantitatively monitored. A quantitative assessment has been worked out for one of the management delivery systems so to provide an example of management indicators. 
Determining these (management) indicators from threshold values is an intricate matter due to the complicated influence of organizational factors on accident processes, and requires more follow-up research.","Ammonia; Delivery systems; Indicator; Organizational factors; Process safety; Safety management system","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:5cf1cabc-edcc-4430-acc9-8dd77a0d023f","http://resolver.tudelft.nl/uuid:5cf1cabc-edcc-4430-acc9-8dd77a0d023f","Determining a realistic ranking of the most dangerous process equipment of the ammonia production process: A practical approach","Schmitz, P.J.H. (TU Delft Safety and Security Science; OCI-Nitrogen); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); Swuste, P.H.J.J. (TU Delft Safety and Security Science)","","2021","OCI Nitrogen seeks to gain knowledge of (leading) indicators regarding the process safety performance of their ammonia production process. The current research determines the most dangerous process equipment by calculating their effects resulting from a loss of containment using DNV GL's Phast™ dispersion model. In this paper, flammable and toxic effects from a release from the main equipment of an ammonia plant have been calculated. Such an encompassing approach, which can be carried out for an entire plant, is innovative and has never been conducted before. By using this model, it has been demonstrated that the effects arising from an event of failure are the largest in process equipment containing pressurized synthesis gas and ‘warm’ liquid ammonia, meaning the ammonia buffer tanks, ammonia product pumps, and the ammonia separator. Most importantly, this document substantiates that it is possible to rank the most hazardous process equipment of the ammonia production process based on an adverse impact on humans using the calculated effect distance as a starting point for a chance of death of at least 95%. The results from the effect calculations can be used for risk mapping of an entire chemical plant or be employed and applied in a layer of protection analysis (LOPA) to establish risk mitigation measures.","Ammonia; Bowtie; Effect; Indicator; Phast™; Process safety","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:b56b5cda-5539-4b86-9392-7018e9419424","http://resolver.tudelft.nl/uuid:b56b5cda-5539-4b86-9392-7018e9419424","Social Signals and Multimedia: Past, Present, Future","Hung, H.S. (TU Delft Pattern Recognition and Bioinformatics); Gurrin, Cathal (Dublin City University); Larson, M.A. (Radboud Universiteit Nijmegen); Gunes, Hatice (University of Cambridge); Ringeval, Fabien (Université Grenoble Alpes); Andre, Elisabeth (Universität Augsburg); Morency, Louis-Philippe (Carnegie Mellon University)","","2021","The rising popularity of Artificial Intelligence (AI) has brought considerable public interest as well faster and more direct transfer of research ideas into practice. One of the aspects of AI that still trails behind considerably is the role of machines in interpreting, enhancing, modeling, generating, and influencing social behavior. Such behavior is captured as social signals, usually by sensors recording multiple modalities, making it classic multimedia data. Such behavior can also be generated by an AI system when interacting with humans. Using AI techniques in combination with multimedia data can be used to pursue multiple goals, two of which are high-lighted here. First, supporting people during social interactions and helping them to fulfil their social needs either actively or passively.Second, improving our understanding of how people collaborate, build relationships, and process self identity. Despite the rise of fields such as Social Signal Processing, a similar panel organised at ACM Multimedia 2014, and an area on social and emotional signal sat the ACM MM since 2014, we argue that we have yet to truly fulfil the potential of the combining social signals and multimedia. 
This panel asks whether we have come far enough and what challenges remain in light of recent global events.","artificial intelligence; human social behavior; multi-modal machine learning; multimedia; social signal processing","en","conference paper","Association for Computing Machinery (ACM)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-04-08","","","Pattern Recognition and Bioinformatics","","",""
"uuid:8b90ccb4-2d61-4a06-8f98-5c20f54e1985","http://resolver.tudelft.nl/uuid:8b90ccb4-2d61-4a06-8f98-5c20f54e1985","Quantitative resilience assessment of chemical process systems using functional resonance analysis method and Dynamic Bayesian network","Zinetullina, Altyngul (Nazarbayev University); Yang, M. (TU Delft Safety and Security Science; Nazarbayev University); Khakzad, Nima (Toronto Metropolitan University); Golman, Boris (Nazarbayev University); Li, Xinhong (Xi'an University of Architecture and Technology)","","2021","The emergent hazards of chemical process systems cannot be wholly identified and are highly uncertain due to the complicated technical-human-organizational interactions. Under uncertain and unpredictable circumstances, resilience becomes an essential property of a chemical process system that helps it better adapt to disruptions and restore from surprising damages. The resilience assessment needs to be enhanced to identify the accident's root causes on the level of technical-human-organizational interactions, and development of the specific resilience attributes to withstand or recover from the disruptions. The outcomes of resilience assessment are valuable to identify potential design or operational improvements to ensure complex process system functionality and safety. The current study integrates the Functional Resonance Analysis Method and dynamic Bayesian Network for quantitative resilience assessment. The method is demonstrated through a two-phase separator of an acid gas sweetening unit. Aspen Hysys simulator is applied to estimate the failure probabilities needed in the resilience assessment model. The study provides a useful tool for rigorous quantitative resilience analysis of complex process systems on the level of technical-human-organizational interactions.","Chemical process systems; Dynamic Bayesian network; FRAM; Resilience assessment","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:c4e3954d-6616-44a3-814f-ff4fae329dc3","http://resolver.tudelft.nl/uuid:c4e3954d-6616-44a3-814f-ff4fae329dc3","Roadmap on signal processing for next generation measurement systems","Iakovidis, Dimitris K. (University of Thessaly); Ooi, Melanie (University of Waikato, Hamilton); Kuang, Ye Chow (University of Waikato, Hamilton); Demidenko, Serge (University of Waikato, Hamilton; Massey University); Shestakov, Alexandr (Sunway University); Sinitsin, Vladimir (Sunway University); Henry, Manus (Sunway University; Coventry University; University of Oxford); Sciacchitano, A. (TU Delft Aerodynamics); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2021","Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analysed for information extraction and quantification. The recent advances in artificial intelligence and machine learning are shifting the research attention towards intelligent, data-driven, signal processing. This roadmap presents a critical overview of the state-of-the-art methods and applications aiming to highlight future challenges and research opportunities towards next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impacts of current and future developments per research field. 
Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.","signal processing; measurement systems; optical measurements; machine learning; biomedical applications; environmental applications; industrial applications","en","review","","","","","","","","","","","Aerodynamics","","",""
"uuid:31558516-3e8f-467c-bc83-495cf154653c","http://resolver.tudelft.nl/uuid:31558516-3e8f-467c-bc83-495cf154653c","A piecewise deterministic Monte Carlo method for diffusion bridges","Bierkens, G.N.J.C. (TU Delft Statistics); Grazzi, S. (TU Delft Statistics); van der Meulen, F.H. (TU Delft Statistics); Schauer, M.R. (TU Delft Statistics; Chalmers University of Technology; University of Gothenburg)","","2021","We introduce the use of the Zig-Zag sampler to the problem of sampling conditional diffusion processes (diffusion bridges). The Zig-Zag sampler is a rejection-free sampling scheme based on a non-reversible continuous piecewise deterministic Markov process. Similar to the Lévy–Ciesielski construction of a Brownian motion, we expand the diffusion path in a truncated Faber–Schauder basis. The coefficients within the basis are sampled using a Zig-Zag sampler. A key innovation is the use of the fully local algorithm for the Zig-Zag sampler that allows to exploit the sparsity structure implied by the dependency graph of the coefficients and by the subsampling technique to reduce the complexity of the algorithm. We illustrate the performance of the proposed methods in a number of examples.","Conditional diffusion; Diffusion bridge; Diffusion process; Faber–Schauder basis; High-dimensional simulation; Intractable target density; Local Zig-Zag sampler; Piecewise deterministic Monte Carlo","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:11e250c6-f7c0-40ad-a6f1-8b18fc46a849","http://resolver.tudelft.nl/uuid:11e250c6-f7c0-40ad-a6f1-8b18fc46a849","GeoMicro3D: A novel numerical model for simulating the reaction process and microstructure formation of alkali-activated slag","Zuo, Y. (TU Delft Materials and Environment; Huazhong University of Science and Technology); Ye, G. (TU Delft Materials and Environment)","","2021","For the first time, this study developed a novel model, named GeoMicro3D, to simulate the reaction process and microstructure formation of alkali-activated slag. The GeoMicro3D model consists of four modules that are designed to simulate, respectively: (i) the initial spatial distribution of real-shape slag particles in alkaline activator, (ii) the dissolution of slag and diffusion of ions via the transition state theory and lattice Boltzmann method, respectively, (iii) the spatial distribution of reaction products using a nucleation probability theory, and (iv) the chemical reactions with thermodynamic modelling. Afterwards the GeoMicro3D model was implemented and verified. The simulation results were discussed and compared with the relevant experimental data and thermodynamic calculation results using GEMS. A good agreement was found in the comparisons, showing the strong simulation capability of GeoMicro3D.","Alkali-activated slag; GeoMicro3D; Microstructure formation; Numerical simulation; Reaction process","en","journal article","","","","","","Accepted author manuscript","","2022-12-17","","","Materials and Environment","","",""
"uuid:b8b0fedb-7d8c-418e-9f8c-4629b3a97198","http://resolver.tudelft.nl/uuid:b8b0fedb-7d8c-418e-9f8c-4629b3a97198","On-the-fly construction of surrogate constitutive models for concurrent multiscale mechanical analysis through probabilistic machine learning","Rocha, I.B.C.M. (TU Delft Applied Mechanics); Kerfriden, P. (Géosciences; Cardiff University); van der Meer, F.P. (TU Delft Applied Mechanics)","","2021","Concurrent multiscale finite element analysis (FE2) is a powerful approach for high-fidelity modeling of materials for which a suitable macroscopic constitutive model is not available. However, the extreme computational effort associated with computing a nested micromodel at every macroscopic integration point makes FE2 prohibitive for most practical applications. Constructing surrogate models able to efficiently compute the microscopic constitutive response is therefore a promising approach in enabling concurrent multiscale modeling. This work presents a reduction framework for adaptively constructing surrogate models for FE2 based on statistical learning. The nested micromodels are replaced by a machine learning surrogate model based on Gaussian Processes (GP). The need for offline data collection is bypassed by training the GP models online based on data coming from a small set of fully-solved anchor micromodels that undergo the same strain history as their associated macroscopic integration points. The Bayesian formalism inherent to GP models provides a natural tool for online uncertainty estimation through which new observations or inclusion of new anchor micromodels are triggered. The surrogate constitutive manifold is constructed with as few micromechanical evaluations as possible by enhancing the GP models with gradient information and the solution scheme is made robust through a greedy data selection approach embedded within the conventional finite element solution loop for nonlinear analysis. 
The sensitivity to model parameters is studied with a tapered bar example with plasticity and the framework is further demonstrated with the elastoplastic analysis of a plate with multiple cutouts and with a crack growth example for mixed-mode bending. Although not able to handle non-monotonic strain paths in its current form, the framework is found to be a promising approach in reducing the computational cost of FE2, with significant efficiency gains being obtained without resorting to offline training.","Active learning; Concurrent multiscale; Gaussian Processes (GP); Probabilistic learning; Surrogate modeling","en","journal article","","","","","","","","","","","Applied Mechanics","","",""
"uuid:d1c13cdb-6b56-408b-b459-a1e22ec2d072","http://resolver.tudelft.nl/uuid:d1c13cdb-6b56-408b-b459-a1e22ec2d072","Using natural language processing to explore heterogeneity in moral terminology in palliative care consultations","van den Broek-Altenburg, Eline (University of Vermont); Gramling, Robert (University of Vermont); Gothard, Kelly (University of Vermont); Kroesen, M. (TU Delft Transport and Logistics); Chorus, C.G. (TU Delft Transport and Logistics)","","2021","Background: High quality serious illness communication requires good understanding of patients’ values and beliefs for their treatment at end of life. Natural Language Processing (NLP) offers a reliable and scalable method for measuring and analyzing value- and belief-related features of conversations in the natural clinical setting. We use a validated NLP corpus and a series of statistical analyses to capture and explain conversation features that characterize the complex domain of moral values and beliefs. The objective of this study was to examine the frequency, distribution and clustering of morality lexicon expressed by patients during palliative care consultation using the Moral Foundations NLP Dictionary. Methods: We used text data from 231 audio-recorded and transcribed inpatient PC consultations and data from baseline and follow-up patient questionnaires at two large academic medical centers in the United States. With these data, we identified different moral expressions in patients using text mining techniques. We used latent class analysis to explore if there were qualitatively different underlying patterns in the PC patient population. We used Poisson regressions to analyze if individual patient characteristics, EOL preferences, religion and spiritual beliefs were associated with use of moral terminology. Results: We found two latent classes: a class in which patients did not use many expressions of morality in their PC consultations and one in which patients did. 
Age, race (white), education, spiritual needs, and whether a patient was affiliated with Christianity or another religion were all associated with membership of the first class. Gender, financial security and preference for longevity-focused over comfort focused treatment near EOL did not affect class membership. Conclusions: This study is among the first to use text data from a real-world situation to extract information regarding individual foundations of morality. It is the first to test empirically if individual moral expressions are associated with individual characteristics, attitudes and emotions.","Conversation science; Decision-making; End-of-life; Latent class analysis; Morality; Natural language processing; Palliative care; Poisson regression","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:f3fbf2d8-83e0-43e9-ac3c-fce565ff1e5e","http://resolver.tudelft.nl/uuid:f3fbf2d8-83e0-43e9-ac3c-fce565ff1e5e","Energy, exergy, and environmental analyses of renewable hydrogen production through plasma gasification of microalgal biomass","Kuo, P.C. (TU Delft Energy Technology); Illathukandy, Biju (Indian Institute of Technology Delhi; Government Engineering College); Wu, Wei (National Cheng Kung University); Chang, Jo Shu (National Cheng Kung University; Tunghai University)","","2021","In this study, an energy, exergy, and environmental (3E) analyses of a plasma-assisted hydrogen production process from microalgae is investigated. Four different microalgal biomass fuels, namely, raw microalgae (RM) and three torrefied microalgal fuels (TM200, TM250, and TM300), are used as the feedstock for steam plasma gasification to generate syngas and hydrogen. The effects of steam-to-biomass (S/B) ratio on the syngas and hydrogen yields, and energy and exergy efficiencies of plasma gasification (ηEn,PG, ηEx,PG) and hydrogen production (ηEn,H2, ηEx,H2) are taken into account. Results show that the optimal S/B ratios of RM, TM200, TM250, and TM300 are 0.354, 0.443, 0.593, and 0.760 respectively, occurring at the carbon boundary points (CBPs), where the maximum values of ηEn,PG, ηEx,PG, ηEn,H2, and ηEx,H2 are also achieved. At CBPs, torrefied microalgae as feedstock lower the ηEn,PG, ηEx,PG, ηEn,H2, and ηEx,H2 because of their improved calorific value after undergoing torrefaction, and the increased plasma energy demand compared to the RM. However, beyond CBPs the torrefied feedstock displays better performance. 
A comparative life cycle analysis indicates that TM300 exhibits the highest greenhouse gas (GHG) emissions and the lowest net energy ratio (NER), due to the indirect emissions associated with electricity consumption.","3E analyses; CO2 emissions; Hydrogen production; Microalgal biomass; Plasma gasification; Process simulation","en","journal article","","","","","","","","","","","Energy Technology","","",""
"uuid:5f9b2836-9aa9-44a5-8017-124b2793f7f3","http://resolver.tudelft.nl/uuid:5f9b2836-9aa9-44a5-8017-124b2793f7f3","A novel machine learning application: Water quality resilience prediction Model","Imani, Maryam (Anglia Ruskin University); Hasan, Md Mahmudul (Anglia Ruskin University); Bittencourt, Luiz Fernando (University of Campinas); McClymont, Kent (Anglia Ruskin University); Kapelan, Z. (TU Delft Sanitary Engineering)","","2021","Resilience-informed water quality management embraces the growing environmental challenges and provides greater accuracy by unpacking the systems' characteristics in response to failure conditions in order to identify more effective opportunities for intervention. Assessing the resilience of water quality requires complex analysis of influential parameters which can be challenging, time consuming and costly to compute. It may also require building detailed conceptual and/or physically process-based models that are difficult to build, calibrate and validate. This study utilises Artificial Neural Network (ANN) to develop a novel application to predict water quality resilience to simplify resilience evaluation. The Fuzzy Analytic Hierarchy Process method is used to rank water basins based on their level of resilience and to identify the ones that demand prompt restoration strategies. The commonly used ‘magnitude * duration of being in failure state’ quantification method has been used to formulate and evaluate resilience. A 17-years long water quality dataset from the 22 water basins in the State of São Paulo, Brazil, was used to train and test the ANN model. The overall agreement between the measured and simulated WQI resilience values is satisfactory and hence, can be used by planners and decision makers for improved water management. 
Moreover, comparative analyses show similarities and differences between the ‘level of criticalities’ reported in each zone by Environment Agency of the state of São Paulo (CETESB) and by the resilience model in this study.","Analytic hierarchy process; Artificial neural network; Fuzzy logic; Machine learning; Resilience; Triangular fuzzy number; Water quality","en","journal article","","","","","","Accepted Author Manuscript","","2022-01-14","","","Sanitary Engineering","","",""
"uuid:d4ca477a-a150-4013-a234-ef6f1afdec56","http://resolver.tudelft.nl/uuid:d4ca477a-a150-4013-a234-ef6f1afdec56","Isolation and Identification of Organics-Degrading Bacteria From Gas-to-Liquid Process Water","Surkatti, Riham (Qatar University); Al Disi, Zulfa A. (Qatar University); El-Naas, Muftah H. (Qatar University); Zouari, Nabil (Qatar University); van Loosdrecht, Mark C.M. (TU Delft BT/Environmental Biotechnology); Onwusogh, Udeogu (Qatar Shell RTC, Doha)","","2021","The gas-to-liquid (GTL) process generates considerable amounts of wastewater that are highly acidic and characterized by its high chemical oxygen demand (COD) content, due to the presence of several organic pollutants, such as alcohols, ketones, aldehydes, and fatty acids. The presence of these organics in the process water may lead to adverse effect on the environment and aquatic life. Thus, it is necessary to reduce the COD content of GTL process water to an acceptable limit before discharging or reusing the treated water. Due to several advantages, biological treatment is often utilized as the main step in GTL process water treatment plants. In order to have a successful biotreatment process, it is required to choose effective and suitable bacterial strains that have the ability to degrade the organic pollutants in GTL process water. In this work, bacterial strains were isolated from the GTL process water, identified by 16S rRNA gene sequencing and then used in the biodegradation process. The detailed identification of the strains confirmed the presence of three organics-degrading bacteria identified as Alcaligenes faecalis, Stenotrophomonas sp., and Ochrobactrum sp. Furthermore, biodegradation experiments were carried out and confirmed that the pure culture as well as the mixed culture consortium of the bacterial strains has the ability to reduce the organic pollutants in GTL process water. 
However, the growth rate and biodegradation efficiency depend on the type of strain and the initial COD content. Indeed, the removal percentage and growth rate were enhanced after 7 days for all cultures, resulting in COD reductions of up to 60%. Moreover, the mixed culture of bacterial strains can tolerate and treat GTL process water over a wide range of COD contents.","biodegradation; COD reduction; GTL process water; identification; isolation","en","journal article","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:bac75f0a-9180-4a3d-879f-c44455920776","http://resolver.tudelft.nl/uuid:bac75f0a-9180-4a3d-879f-c44455920776","Evaluation of a full-scale suspended sludge deammonification technology coupled with an hydrocyclone to treat thermal hydrolysis dewatering liquors","Ochs, Pascal (Cranfield University; Thames Water Utilities Ltd.); Martin, Benjamin D. (Thames Water Utilities Ltd.); Germain, Eve (Thames Water Utilities Ltd.); Wu, Zhuoying (Imperial College London); Lee, Po Heng (Imperial College London); Stephenson, Tom (Cranfield University); van Loosdrecht, Mark C.M. (TU Delft BT/Environmental Biotechnology); Soares, Ana (Cranfield University)","","2021","Suspended sludge deammonification technologies are frequently applied for sidestream ammonia removal from dewatering liquors resulting from a thermal hydrolysis anaerobic digestion (THP/AD) process. This study aimed at optimizing the operation, evaluate the performance and stability of a full-scale suspended sludge continuous stirred tank reactor (S-CSTR) with a hydrocy-clone for anaerobic ammonia oxidizing bacteria (AMX) biomass separation. The S-CSTR operated at a range of nitrogen loading rates of 0.08–0.39 kg N m−3 d−1 displaying nitrogen removal efficiencies of 75–89%. The hydrocyclone was responsible for retaining 56–83% of the AMX biomass and the washout of ammonia oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB) was two times greater than AMX. The solid retention time (SRT) impacted on NOB washout, that ranged from 0.02–0.07 d−1 . Additionally, it was demonstrated that an SRT of 11–13 d was adequate to wash-out NOB. Microbiome analysis revealed a higher AMX abundance (Candidatus scalindua) in the reactor through the action of the hydrocyclone. 
Overall, this study established the optimal operational envelope for deammonification of THP/AD dewatering liquors and the role of the hydrocyclone in maintaining AMX in the S-CSTR and hence obtaining process stability.","Deammonification; Hydrocyclone; Partial nitritation/anammox; Sidestream; Thermal hydrolysis process","en","journal article","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:eaf7648b-d661-4396-b11d-efdff943f8b9","http://resolver.tudelft.nl/uuid:eaf7648b-d661-4396-b11d-efdff943f8b9","A multinomial process tree for reliability assessment of machinery in autonomous ships","Abaei, M.M. (TU Delft Ship Design, Production and Operations); Hekkenberg, R.G. (TU Delft Ship Design, Production and Operations); BahooToroody, Ahmad (Aalto University)","","2021","Maritime Autonomous Surface Ships have received a significant amount of attention in recent projects. They promise a reduction in marine accidents and mitigation of human errors. Most of the ongoing research effort is directed toward autonomous navigation and cybersecurity. However, the importance of a machinery plant in the engine room that can operate reliably without human attendance is hardly investigated. To prevent failures in such systems and extend the interval between required human interventions, it is essential to improve their reliability. This paper aims to present a systematic approach to evaluate the reliability of an autonomous system under the influence of uncertain disruptions and to predict failure rates of unattended machinery plants. A Multinomial Process Tree is used to model failures in the main failure-sensitive components. Hierarchical Bayesian Inference is adopted to facilitate the prediction of frequencies of disruptive events and estimate the entire system's failure rate. The outcome of this research enables design strategies to improve the reliability of autonomous ships and prevent Fatal Technical Failure during the operation. This allows assessing whether a given machinery plant is sufficiently reliable to be used on unmanned ships. A case study is considered to demonstrate the application of the presented method.","Autonomous shipping; Bayesian inference; Machinery plant; Multinomial process tree; Reliability engineering","en","journal article","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:0713e10e-31f1-499d-89dc-940e50faef3a","http://resolver.tudelft.nl/uuid:0713e10e-31f1-499d-89dc-940e50faef3a","Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints","Andriotis, C. (TU Delft Structural Design & Mechanics); Papakonstantinou, K. G. (The Pennsylvania State University)","","2021","Determination of inspection and maintenance policies for minimizing long-term risks and costs in deteriorating engineering environments constitutes a complex optimization problem. Major computational challenges include the (i) curse of dimensionality, due to exponential scaling of state/action set cardinalities with the number of components; (ii) curse of history, related to exponentially growing decision-trees with the number of decision-steps; (iii) presence of state uncertainties, induced by inherent environment stochasticity and variability of inspection/monitoring measurements; (iv) presence of constraints, pertaining to stochastic long-term limitations, due to resource scarcity and other infeasible/undesirable system responses. In this work, these challenges are addressed within a joint framework of constrained Partially Observable Markov Decision Processes (POMDP) and multi-agent Deep Reinforcement Learning (DRL). POMDPs optimally tackle (ii)-(iii), combining stochastic dynamic programming with Bayesian inference principles. Multi-agent DRL addresses (i), through deep function parametrizations and decentralized control assumptions. Challenge (iv) is herein handled through proper state augmentation and Lagrangian relaxation, with emphasis on life-cycle risk-based constraints and budget limitations. 
The underlying algorithmic steps are provided, and the proposed framework is found to outperform well-established policy baselines and facilitate adept prescription of inspection and intervention actions, in cases where decisions must be made in the most resource- and risk-aware manner.","Constrained stochastic optimization; Decentralized multi-agent control; Deep reinforcement learning; Inspection and maintenance planning; Partially observable Markov decision processes; System risk and reliability","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-09-11","","","Structural Design & Mechanics","","",""
"uuid:4009f4ab-b96c-4a1a-b963-94237c1497be","http://resolver.tudelft.nl/uuid:4009f4ab-b96c-4a1a-b963-94237c1497be","Reduction of cost, energy and emissions of the formalin production process via methane steam reforming","Puhar, Jan (University of Maribor); Vujanović, Annamaria (University of Maribor); Awad, P.W.A.A. (TU Delft ChemE/Delft Ingenious Design); Čuček, Lidija (University of Maribor)","","2021","Production of formalin, which is among the highest production volume chemicals, is highly energy-intensive; thus, reduction of energy use is very important in reducing cost and emissions. The aim of this and its larger overall research is to systemically analyze how to improve sustainability of processes producing formalin as an intermediate or final product. In this part of the work, energy consumption requirements are analyzed for the conventional formalin production process via methane steam reforming, where opportunities for energy consumption reduction are identified. This work will serve as a base case for further investigation of alternative formalin production pathways. To achieve energy savings, heat integration technology by combined pinch analysis and mathematical programming is applied. The formalin production process is simulated using Aspen HYSYS, and heat integration of the production process was performed based on simulated design using GAMS software. Economic and environmental footprint analyses were performed for both non-integrated and integrated designs. Results show that heat integration reduces heat consumption by around 39%, leading to a saving of 11% in capital cost and turning annual operating cost into positive revenue. 
Heat integration also improves the environmental aspect, where a 7-22% reduction in selected environmental footprints is achieved.","Economic performance; Energy consumption reduction; Environmental footprint analysis; Formalin production process; Heat integration; Mathematical programming; Pinch analysis","en","journal article","","","","","","","","","","","ChemE/Delft Ingenious Design","","",""
"uuid:f0ad8671-60bb-4b20-bf44-5429e7839c04","http://resolver.tudelft.nl/uuid:f0ad8671-60bb-4b20-bf44-5429e7839c04","Multisine frequency modulation of intra-epidermal electric pulse sequences: A novel tool to study nociceptive processing","van den Berg, Boudewijn (University of Twente); Manoochehri, M. (TU Delft Biomechatronics & Human-Machine Control); Kasting, Mindy (Student TU Delft); Schouten, A.C. (TU Delft Biomechatronics & Human-Machine Control; Northwestern University Feinberg School of Medicine; University of Twente); van der Helm, F.C.T. (TU Delft Biomechatronics & Human-Machine Control; Northwestern University Feinberg School of Medicine); Buitenweg, Jan R. (University of Twente)","","2021","A sustained sensory stimulus with a periodic variation of intensity creates an electrophysiological brain response at associated frequencies, referred to as the steady-state evoked potential (SSEP). The SSEPs elicited by the periodic stimulation of nociceptors in the skin may represent activity of a brain network that is primarily involved in nociceptive processing. Exploring the behavior of this network could lead to valuable insights regarding the pathway from nociceptive stimulus to pain perception. We present a method to directly modulate the pulse rate of nociceptive afferents in the skin with a multisine waveform through intra-epidermal electric stimulation. The technique was demonstrated in healthy volunteers. Each subject was stimulated using a pulse sequence modulated by a multisine waveform of 3, 7 and 13 Hz. The EEG was analyzed for the presence of the base frequencies and associated (sub)harmonics. Topographies showed significant central and contralateral SSEP responses at 3, 7 and 13 Hz in 7, 4 and 3 of the 9 participants included for analysis, respectively. As such, we found that intra-epidermal stimulation with a multisine frequency modulated pulse sequence can generate nociceptive SSEPs.
The possibility to stimulate the nociceptive system using multisine frequency modulated pulses offers novel opportunities to study the temporal dynamics of nociceptive processing.","Electroencephalography; Intra-epidermal stimulation; Nociceptive processing; Nonlinearity; Steady-state evoked potentials; System identification; Time-delay","en","journal article","","","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:59a6d987-92e9-4198-bd32-fd5ddcb211b5","http://resolver.tudelft.nl/uuid:59a6d987-92e9-4198-bd32-fd5ddcb211b5","Microwave heating in heterogeneous catalysis: Modelling and design of rectangular traveling-wave microwave reactor","Yan, P. (TU Delft Intensified Reaction and Separation Systems; Tianjin University); Stankiewicz, A.I. (TU Delft Complex Fluid Processing); Eghbal Sarabi, F. (TU Delft Complex Fluid Processing); Nigar, H. (TU Delft Complex Fluid Processing)","","2021","Microwave irradiation can intensify catalytic chemistry by selective and controlled microwave-catalytic packed-bed interaction. However, turning it to reality from laboratory to practical applications is hindered by challenges in the reactor design and scale-up. Here, we present a novel, rectangular traveling-wave microwave reactor (RTMR) and provide an easy-to-handle, 3-step design procedure of such reactor. The multiphysics model couples the electromagnetic field, heat transfer, and fluid dynamics in order to optimize the geometrical parameters and operational conditions for the microwave-assisted heterogeneous catalysis. The results show that the microwave energy input/output ports should be well-positioned and matched; otherwise, it would significantly decrease energy efficiency. In terms of microwave transmission, the RTMR presents a mix between the standing wave and the traveling-wave systems. Gas space velocity and input temperature significantly affect the temperature profile, and gas–solid temperature can present no significant difference under certain gas–solid contact.","Design and optimization; Microwave heating; Microwave reactor; Microwave-assisted heterogeneous catalysis; Multiphysics modeling; Process intensification","en","journal article","","","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:04e9184a-cbef-4712-b8f8-e88ba757d7e6","http://resolver.tudelft.nl/uuid:04e9184a-cbef-4712-b8f8-e88ba757d7e6","Predictive analytical modelling and experimental validation of processing maps in additive manufacturing of nitinol alloys","Zhu, Jia-Ning (TU Delft Team Vera Popovich); Borisov, Evgenii (Peter the Great Saint-Petersburg Polytechnic University); Liang, X. (TU Delft Team Marcel Hermans); Farber, Eduard (Peter the Great Saint-Petersburg Polytechnic University); Hermans, M.J.M. (TU Delft Team Marcel Hermans); Popovich, V. (TU Delft Team Vera Popovich; Peter the Great Saint-Petersburg Polytechnic University)","","2021","Nitinol (NiTi) shape memory alloys fabricated by Laser Powder Bed Fusion (L-PBF) Additive Manufacturing (AM) have attracted much attention in recent years because, compared with conventional manufacturing processes, L-PBF allows the production of Nitinol parts with high design complexity. Avoidance of defects during L-PBF is crucial for the production of high quality Nitinol parts. In this study, analytical models predicting melt pool dimensions and defect formation criteria were synergistically used to develop processing maps demonstrating boundary conditions for the formation of defects such as balling, keyhole-induced pores, and lack of fusion. Experimental validation has demonstrated that this method can provide an accurate estimation and guide the manufacturability of defect-free Nitinol alloys. Moreover, the crack formation phenomena were experimentally analysed, which showed that a low linear energy density (El) should be chosen to avoid cracks in the optimized process windows. Based on model predictions and experimental calibrations, Nitinol samples with a relative density of more than 99% were successfully fabricated.","Analytical model; Defect formation; Laser powder bed fusion; Nitinol alloys; Process optimization","en","journal article","","","","","","","","","","","Team Vera Popovich","","",""
"uuid:8c543ec1-3ac9-4e84-b365-b210bf10b521","http://resolver.tudelft.nl/uuid:8c543ec1-3ac9-4e84-b365-b210bf10b521","A process model for collaboration in circular oriented innovation","Brown, P.D. (TU Delft Circular Product Design); Von Daniels, C. (Circle Economy & Sustainable Finance Lab); Bocken, N.M.P. (TU Delft Responsible Marketing and Consumer Behavior; Lund University); Balkenende, R. (TU Delft Circular Product Design)","","2021","Circular oriented innovation commonly requires collaboration. Yet, to date, circular research lacks empirical investigation into collaborative processes. Collaborative processes are, however, highly researched within strategic management literature, thus offering valuable insights. The purpose of this paper is to investigate, identify and order the processes that companies undertake when designing and implementing collaborations for circular oriented innovation. Firstly, we integrate disparate strategic management literature to identify collaborative process ‘know-how’ and relevant ‘building blocks’. Secondly, we generate practice-based insights, via semi-structured interviews and desk-research, across three research cycles to understand how companies collaborate within circular oriented innovation. Theoretical contributions stem from the assessment and integration of strategic management collaborative process knowledge into the circular context. Managerial contributions derive from the process model that describes how to build collaborative circular oriented innovation. Furthermore, the principal result is the empirical investigation and identification of collaborative circular oriented innovation challenges. Challenges relate to how to: 1) formulate an initial ‘circular proposition’, 2) involve the ‘right’ people, 3) align upon a shared circular purpose, 4) develop circular oriented governance and decision-making, and 5) develop a circular oriented value capture model focused on collective outcomes.
These form the basis for our proposed future research agenda. This research agenda aims to stimulate researchers and practitioners to further demystify collaborative processes to accelerate the transition towards a circular economy.","Circular business models; Circular economy; Circular oriented innovation; Collaboration; Collaborative innovation; Process model","en","journal article","","","","","","","","","","","Circular Product Design","","",""
"uuid:153ad0ce-e040-4945-90e5-9c99b4073494","http://resolver.tudelft.nl/uuid:153ad0ce-e040-4945-90e5-9c99b4073494","Digital biomarkers and algorithms for detection of atrial fibrillation using surface electrocardiograms: A systematic review: Digital Biomarkers for AF in Surface ECGs","Wesselius, F.J. (TU Delft Biomechanical Engineering; Erasmus MC); van Schie, M.S. (TU Delft Biomechanical Engineering; Erasmus MC); de Groot, N.M.S. (TU Delft Signal Processing Systems; TU Delft Biomechanical Engineering; Erasmus MC); Hendriks, R.C. (TU Delft Signal Processing Systems)","","2021","Aims: Automated detection of atrial fibrillation (AF) in continuous rhythm registrations is essential in order to prevent complications and optimize treatment of AF. Many algorithms have been developed to detect AF in surface electrocardiograms (ECGs) during the past few years. The aim of this systematic review is to gain more insight into these available classification methods by discussing previously used digital biomarkers and algorithms and make recommendations for future research. Methods: On the 14th of September 2020, the PubMed database was searched for articles focusing on algorithms for AF detection in ECGs using the MeSH terms Atrial Fibrillation, Electrocardiography and Algorithms. Articles which solely focused on differentiation of types of rhythm disorders or prediction of AF termination were excluded. Results: The search resulted in 451 articles, of which 130 remained after full-text screening. Not only did the amount of research on methods for AF detection increase over the past years, but a trend towards more complex classification methods is observed. Furthermore, three different types of features can be distinguished: atrial features, ventricular features, and signal features. Although AF is an atrial disease, only 22% of the described methods use atrial features. Conclusion: More and more studies focus on improving accuracy of classification methods for AF in ECGs. 
As a result, algorithms become increasingly complex and less well interpretable. Only a few studies focus on detecting atrial activity in the ECG. Developing innovative methods focusing on detection of atrial activity might provide accurate classifiers without compromising on transparency.","Algorithms; Atrial fibrillation; Classification; ECG signal Processing; Machine learning; Telemetry","en","review","","","","","","","","","","Biomechanical Engineering","Signal Processing Systems","","",""
"uuid:32c745a3-4818-4580-8deb-22e9d57c5a9e","http://resolver.tudelft.nl/uuid:32c745a3-4818-4580-8deb-22e9d57c5a9e","Translating the invisible: Governing underground utilities in the Amsterdam airport Schiphol terminal project","Biersteker, Erwin (Vrije Universiteit Amsterdam); Koppenjan, Joop (Erasmus Universiteit Rotterdam); van Marrewijk, A.H. (TU Delft Design & Construction Management; Vrije Universiteit Amsterdam)","","2021","Governing material conditions—including physical, material subjects such as machines, built constructions, construction materials, and subsoils—is a crucial challenge within projects and is underrepresented in project governance theory. To clarify the relationship between project governance and materiality, we draw on translation theory, which is essentially about the reinterpretation, appropriation, and representation of interests related to materials. This paper studies the challenges of governing the underground during the construction of the new terminal at Amsterdam Schiphol Airport. The findings show that, during the project life cycle, the translation of the underground by project actors hampered the necessary relocation of utilities in this project. This eventually resulted in delays and unforeseen costs. This translation is explained by a combination of the governance of the project, strategic interactions of project actors, and the characteristics and context of the material conditions. We contribute to project governance studies by demonstrating the usefulness of translation theory to better understand the mechanisms at play in governing underrepresented material conditions in infrastructure projects.","Infrastructure; Project governance; Risks; Translation process; Underground; Utilities","en","journal article","","","","","","","","","","","Design & Construction Management","","",""
"uuid:13a2f9bf-468f-4a01-810a-048d4bb22750","http://resolver.tudelft.nl/uuid:13a2f9bf-468f-4a01-810a-048d4bb22750","Strategy synthesis for partially-known switched stochastic systems","Jackson, John (University of Colorado); Laurenti, L. (TU Delft Team Luca Laurenti); Frew, Eric (University of Colorado); Lahijanian, Morteza (University of Colorado)","","2021","We present a data-driven framework for strategy synthesis for partially-known switched stochastic systems. The properties of the system are specified using linear temporal logic (LTL) over finite traces (LTLf), which is as expressive as LTL and enables interpretations over finite behaviors. The framework first learns the unknown dynamics via Gaussian process regression. Then, it builds a formal abstraction of the switched system in terms of an uncertain Markov model, namely an Interval Markov Decision Process (IMDP), by accounting for both the stochastic behavior of the system and the uncertainty in the learning step. Then, we synthesize a strategy on the resulting IMDP that maximizes the satisfaction probability of the LTLf specification and is robust against all the uncertainties in the abstraction. This strategy is then refined into a switching strategy for the original stochastic system. We show that this strategy is near-optimal and provide a bound on its distance (error) to the optimal strategy. We experimentally validate our framework on various case studies, including both linear and non-linear switched stochastic systems.","formal synthesis; gaussian process regression; safe autonomy; switched stochastic systems; uncertain markov decision processes","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Team Luca Laurenti","","",""
"uuid:e2fce61d-8cd0-4fee-b187-e32674b3a8d8","http://resolver.tudelft.nl/uuid:e2fce61d-8cd0-4fee-b187-e32674b3a8d8","The Dual Graph Shift Operator: Identifying the Support of the Frequency Domain","Leus, G.J.T. (TU Delft Signal Processing Systems); Segarra, Santiago (Rice University); Ribeiro, Alejandro (University of Pennsylvania); Marques, Antonio G. (King Juan Carlos University)","","2021","Contemporary data is often supported by an irregular structure, which can be conveniently captured by a graph. Accounting for this graph support is crucial to analyze the data, leading to an area known as graph signal processing (GSP). The two most important tools in GSP are the graph shift operator (GSO), which is a sparse matrix accounting for the topology of the graph, and the graph Fourier transform (GFT), which maps graph signals into a frequency domain spanned by a number of graph-related Fourier-like basis vectors. This alternative representation of a graph signal is denominated the graph frequency signal. Several attempts have been undertaken in order to interpret the support of this graph frequency signal, but they all resulted in a one-dimensional interpretation. However, if the support of the original signal is captured by a graph, why would the graph frequency signal have a simple one-dimensional support? Departing from existing work, we propose an irregular support for the graph frequency signal, which we coin dual graph. A dual GSO leads to a better interpretation of the graph frequency signal and its domain, helps to understand how the different graph frequencies are related and clustered, enables the development of better graph filters and filter banks, and facilitates the generalization of classical SP results to the graph domain.","Dual graph shift operator; Frequency support; Graph Fourier transform; Graph signal processing","en","journal article","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:ffa2ba06-58a7-4ffa-8e19-9b095051fa98","http://resolver.tudelft.nl/uuid:ffa2ba06-58a7-4ffa-8e19-9b095051fa98","Flux large deviations of weakly interacting jump processes via well-posedness of an associated Hamilton-Jacobi equation","Kraaij, R.C. (TU Delft Applied Probability)","","2021","We establish uniqueness for a class of first-order Hamilton-Jacobi equations with Hamiltonians that arise from the large deviations of the empirical measure and empirical flux pair of weakly interacting Markov jump processes. As a corollary, we obtain such a large deviation principle in the context of weakly interacting processes with time-periodic rates in which the period-length converges to 0.","Empirical measure and flux; Hamilton-jacobi equation; Large deviations; Weakly interacting jump processes","en","journal article","","","","","","Accepted author manuscript","","","","","Applied Probability","","",""
"uuid:8b3691db-f12c-4d9b-8eee-6dcbf9fe1fec","http://resolver.tudelft.nl/uuid:8b3691db-f12c-4d9b-8eee-6dcbf9fe1fec","First Experiments and Commissioning of the ORCHID Nozzle Test Section","Beltrame, F. (TU Delft Flight Performance and Propulsion); Head, A.J. (TU Delft Flight Performance and Propulsion); de Servi, C.M. (TU Delft Flight Performance and Propulsion; Flemish Institute for Technological Research); Pini, M. (TU Delft Flight Performance and Propulsion); Schrijer, F.F.J. (TU Delft Aerodynamics); Colonna, Piero (TU Delft Flight Performance and Propulsion)","Pini, M. (editor)","2021","This paper reports one of the initial NICFD experiments in the nozzle test section of the ORCHID aimed at providing accurate data for the validation of flow solvers, albeit, at this stage of the research, the focus is limited to inviscid phenomena. Notably, a series of schlieren photographs displaying Mach waves in the supersonic flow of the dense vapor of siloxane MM were obtained and are documented here for the commissioning experiment, namely, for inlet conditions corresponding to a stagnation temperature and pressure of T0 = 252 °C and P0 = 18.4 bar(a). At these inlet conditions the compressibility factor of the fluid is Z0 = 0.58. The digital processing of the schlieren images allowed the estimation of multiple angles formed by the Mach waves stemming from the upper and lower nozzle surfaces because of the infinitesimal density perturbations generated by the, albeit small, roughness of the metal surfaces. These values are related to the local Mach number by a simple geometric relation. Moreover, the total expanded uncertainty in the Mach number was computed. This information, together with the estimate of the average Mach number, was used for a first assessment of the capability of evaluating NICFD effects occurring in a dense organic vapor flow of MM by comparison with the results of CFD simulations. The outcome of the comparison was satisfactory.
It can thus be inferred that the nozzle test section has been commissioned and is ready for experimental campaigns in which its full potential in terms of measurement accuracy, repeatability, and operational flexibility will be exploited.","Data processing; Error identification; Schlieren measurements; Uncertainty estimation","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-08-15","","","Flight Performance and Propulsion","","",""
"uuid:914070cb-d4fe-4bad-8064-bc83154b895b","http://resolver.tudelft.nl/uuid:914070cb-d4fe-4bad-8064-bc83154b895b","Synthesizing Spoken Descriptions of Images","Wang, X. (TU Delft Multimedia Computing; Xi’an Jiaotong University); van der Hout, Justin (Student TU Delft); Zhu, Jihua (Xi’an Jiaotong University); Hasegawa-Johnson, Mark (University of Illinois at Urbana-Champaign); Scharenborg, O.E. (TU Delft Multimedia Computing)","","2021","Image captioning technology has great potential in many scenarios. However, current text-based image captioning methods cannot be applied to approximately half of the world's languages due to these languages’ lack of a written form. To solve this problem, recently the image-to-speech task was proposed, which generates spoken descriptions of images bypassing any text via an intermediate representation consisting of phonemes (image-to-phoneme). Here, we present a comprehensive study on the image-to-speech task in which, 1) several representative image-to-text generation methods are implemented for the image-to-phoneme task, 2) objective metrics are sought to evaluate the image-to-phoneme task, and 3) an end-to-end image-to-speech model that is able to synthesize spoken descriptions of images bypassing both text and phonemes is proposed. Extensive experiments are conducted on the public benchmark database Flickr8k. Results of our experiments demonstrate that 1) State-of-the-art image-to-text models can perform well on the image-to-phoneme task, and 2) several evaluation metrics, including BLEU3, BLEU4, BLEU5, and ROUGE-L can be used to evaluate image-to-phoneme performance. Finally, 3) end-to-end image-to-speech bypassing text and phonemes is feasible.","Speech processing; Image-to-speech generation; multimodal modelling; speech synthesis; cross-modal captioning","en","journal article","","","","","","Accepted author manuscript","","","","","Multimedia Computing","","",""
"uuid:92d4a954-0e60-4a51-8b5f-f3a42627cfcf","http://resolver.tudelft.nl/uuid:92d4a954-0e60-4a51-8b5f-f3a42627cfcf","Unfolding the early fatigue damage process for CFRP cross-ply laminates","Li, X. (TU Delft Structural Integrity & Composites); Kupski, J.A. (TU Delft Structural Integrity & Composites); Teixeira De Freitas, S. (TU Delft Structural Integrity & Composites); Benedictus, R. (TU Delft Structural Integrity & Composites); Zarouchas, D. (TU Delft Structural Integrity & Composites)","","2020","This study investigates the early fatigue damage of cross-ply carbon/epoxy laminates. The aim is to unfold the damage accumulation process, understand the interaction between different damage mechanisms, and quantify their contribution to stiffness degradation. Tension-tension fatigue tests were performed, while edge observation and DIC technique monitored the damage evolution. It was found that different accumulation process and interactive levels between transverse matrix cracks and delamination exist for specimens with similar stiffness degradation. A linear increase of stiffness degradation was observed with the increase of matrix crack density, while the growing trend of stiffness degradation converged with the increase of delamination.","Damage interaction; Digital Image Correlation; Early fatigue damage process; In-situ monitoring","en","journal article","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:752ebddf-82e9-4494-b7a7-d7ebeea5f5d9","http://resolver.tudelft.nl/uuid:752ebddf-82e9-4494-b7a7-d7ebeea5f5d9","anchoring the design process: A framework to make the designerly way of thinking explicit in architectural design education","van Dooren, E.J.G.C. (TU Delft Architectural Engineering)","Asselbergs, M.F. (promotor); van Dorst, M.J. (promotor); Boshuizen, H.P.A. (promotor); Van Merriënboer, J.J.G. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis proposes a framework to address the design process in design education. Building upon the assumption that teachers, being professional designers, do not discuss the design process in the architectural design studio and do not have a vocabulary to do so, five generic elements or anchor points are defined which represent the basic design skills. The validity of the framework and the assumption is tested respectively in interviews with a variety of designers and in observations of dialogues between teachers and students. In the final test the design process is addressed in the design studio: the first experiences show that students’ understanding and self-efficacy may increase.
The five elements enable teachers and students to address the designerly attitude. The way designers reason consists of: (1) experimentation; an experimentation-based way of thinking; how to explore and reflect, (2) the frame of reference; a knowledge-based way of thinking; how to work with common and proven ‘professional’ knowledge, and (3) the guiding theme; a value-based way of thinking; how to take a position in the design process. Next to that, (4) the laboratory is the (visual) language or set of means designers use to think designerly, and (5) the domains are the playing field of the designer, the product aspects s/he should address.","design education; design process; generic elements; making explicit; architectural design","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-299-4","","","","A+BE I Architecture and the Built Environment No 17 (2020)","","","","","Architectural Engineering","","",""
"uuid:28b39202-0d64-4402-b7ae-0683e1ee0374","http://resolver.tudelft.nl/uuid:28b39202-0d64-4402-b7ae-0683e1ee0374","An integrated assessment of safety and efficiency of aircraft maintenance strategies using agent-based modelling and stochastic Petri nets","Lee, J. (TU Delft Air Transport & Operations); Mitici, M.A. (TU Delft Air Transport & Operations)","","2020","Aircraft maintenance is key for safe and efficient aircraft operations. While most studies propose cost-efficient maintenance strategies, the safety and efficiency of these strategies need to be quantified. This paper proposes a formal framework to assess the safety and efficiency of maintenance strategies by means of agent-based modelling, stochastically and dynamically coloured Petri nets, and Monte Carlo simulation. We model an end-to-end aircraft maintenance process, considering several maintenance stakeholders. We apply our framework for aircraft landing gear brakes, and use a Gamma process to model the degradation trends of the brakes. The numerical results show that applying data-driven strategies reduces the number of inspections by 36%, while maintaining the same level of safety as in the case of traditional time-based maintenance strategies. Furthermore, in order to discuss the possibility to substitute all inspections by sensor monitoring, an advanced data-driven strategy using prognostics is considered. Overall, our proposed framework is generic and can readily be applied to assess the safety and efficiency of the maintenance of other aircraft components and maintenance strategies.","Aircraft maintenance; Gamma process; Landing gear brake; Safety; Simulation; Stochastically and dynamically coloured Petri nets","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:11fc3020-2e13-4825-92ca-d4c9f7fd6815","http://resolver.tudelft.nl/uuid:11fc3020-2e13-4825-92ca-d4c9f7fd6815","Rational Design of Afterglow and Storage Phosphors","Lyu, T. (TU Delft RST/Luminescence Materials)","Dorenbos, P. (promotor); Delft University of Technology (degree granting institution)","2020","In this thesis, we have studied two types of charge carrier capturing and detrapping processes: (a) electron capturing and electron liberation; (b) hole
capturing and hole liberation. Both the (a) and (b) processes can be utilized for the rational design of afterglow and storage phosphors in different compounds.","afterglow; storage phosphor; charge carrier trapping processes; trap depth engineering; lanthanides; energy storage; Bi2+; Bi3+","en","doctoral thesis","","978-94-6380-906-1","","","","","","","","","RST/Luminescence Materials","","",""
"uuid:25de1e90-f586-4973-8f04-f2c744609959","http://resolver.tudelft.nl/uuid:25de1e90-f586-4973-8f04-f2c744609959","The effect of stray current on hardening and hardened cement-based materials","Susanto, A. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Koleva, D.A. (copromotor); Delft University of Technology (degree granting institution)","2020","Stray current has been a major concern for many years due to its effect on (reinforced) concrete structures and underground infrastructures. It has been reported that stray current affects not only steel reinforcement embedded in concrete, but can also induce degradation of the cement-based matrix. Stray current causes an increase of temperature in hardening concrete due to Joule heating, which accelerates cement hydration. The accelerated cement hydration results in faster evolution of materials properties (e.g. stiffness and strength) and a faster decrease of the capillary porosity. The microstructural change due to stray current flow will affect transport properties, as well as the service life performance of cement-based materials. In case the concrete is exposed to water, leaching of alkali ions will decrease compressive strength and increase the permeability and diffusion coefficient of concrete. Under stray current, leaching of alkali ions in concrete is accelerated, which will increase the level of structural degradation. Deterioration of concrete due to stray current involves many mechanisms, including ion and mass transport, electrical conduction, heat transfer and corresponding occurrence of mechanical stresses. However, studies on the effect of stray current on material properties (e.g. microstructural, mechanical and electrical properties) and the long-term performance/durability of cement-based materials are still lacking. The aim of this thesis is to investigate the effects of stray current on the long-term performance of cement-based materials.
The results of this project will contribute to a better understanding of the beneficial (positive) and/or detrimental (negative) effects of stray current on cement-based materials, which is of significant importance for practice...","Stray current; Joule heating; cement-based materials; temperature; hydration process; microstructure; diffusion coefficient; insulation; service life","en","doctoral thesis","","978-94-92597-47-2","","","","","","","","","Materials and Environment","","",""
"uuid:8cd1ab25-68bb-4912-b41a-811bb51e3c53","http://resolver.tudelft.nl/uuid:8cd1ab25-68bb-4912-b41a-811bb51e3c53","Automatic detection and characterization of ground occlusions in urban point clouds from mobile laser scanning data","Balado Frías, J. (TU Delft GIS Technologie; University of Vigo); González, E. (University of Vigo); Verbree, E. (TU Delft GIS Technologie); Díaz-Vilarino, L. (TU Delft GIS Technologie; University of Vigo); Lorenzo, H. (University of Vigo)","","2020","Occlusions cause serious problems that reduce the applicability of numerous algorithms. The aim of this work is to detect and characterize urban ground gaps based on the occluding object. The input point clouds have been acquired with Mobile Laser Scanning and have been previously segmented into ground, buildings and objects, which have been classified. The method generates various raster images according to segmented point cloud elements, and detects gaps within the ground based on their connectivity and the application of the hit-or-miss transform. The method has been tested in four real case studies in the cities of Vigo and Paris, and an accuracy of 99.6% has been obtained in occlusion detection and labelling. Cars caused 80.6% of the occlusions. Each car occluded an average ground area of 11.9 m2. The proposed method facilitates knowing the percentage of occluded ground, and whether this would be reduced in successive multi-temporal acquisitions based on mobility characteristics of each object class.","urban environment; point clouds; occlusion detection; image processing; raster; object classification","en","journal article","","","","","","","","","","","GIS Technologie","","",""
"uuid:2b392951-3781-4aed-b093-547c70cc581d","http://resolver.tudelft.nl/uuid:2b392951-3781-4aed-b093-547c70cc581d","Intertidal Flats in Engineered Estuaries: On the Hydrodynamics, Morphodynamics, and Implications for Ecology and System Management","de Vet, P.L.M. (TU Delft Coastal Engineering)","van Prooijen, Bram (promotor); Wang, Zhengbing (promotor); Delft University of Technology (degree granting institution)","2020","Intertidal flats — regions of estuaries that emerge every tide from the water — form unique ecosystems. Benthic communities living in the bed are a valuable food source for wading birds. Salt marshes present on these flats further enhance the biodiversity. Through the damping of waves, intertidal flats also contribute to the safety of the hinterland against flooding. In engineered estuaries, human interventions such as storm surge barriers, navigation channels, dams, and levees affect these ecologically valuable intertidal flats and may even threaten their existence. Therefore, these systems should be managed with care, requiring a thorough understanding of the mechanisms shaping intertidal flats. This dissertation aims to identify and quantify the natural and anthropogenic processes driving hydrodynamics and morphodynamics of intertidal flats, and to reveal the implications for ecology and system management. The Eastern Scheldt and Western Scheldt estuaries (the Netherlands) were selected for this study. These were chosen because of the extensive datasets measured in both estuaries and the different types of human interventions affecting these systems. In the Eastern Scheldt, a storm surge barrier closes during storm conditions and reduces tidal flow velocities inside the estuary at normal conditions. Tidal velocities are also reduced by dams in the branches of this estuary. 
In the Western Scheldt, sediment is being relocated from too shallow parts of the navigation channel to other parts of the estuary, enabling navigation to economically important harbors. In this dissertation it is shown that it is the aggregated system of natural forces and human interventions that drives the eco-morphological evolution of intertidal flats in estuaries. Intertidal flats respond to local as well as to system-wide changes in sediment availability and hydrodynamics due to human interventions. Even under major human interventions, the natural forces remain relevant. Due to many spatial and temporal scales involved in the eco-morphological response of intertidal flats to changing natural and anthropogenic forces, estuaries require adaptive management strategies.","Intertidal flats; Estuaries; Human interventions; Natural processes; Morphodynamics; Hydrodynamics; Ecology; Numerical modeling; Field measurements","en","doctoral thesis","","978-94-6384-123-8","","","","","","","","","Coastal Engineering","","",""
"uuid:dc48a17a-0cbb-4580-bf43-5c59691283d8","http://resolver.tudelft.nl/uuid:dc48a17a-0cbb-4580-bf43-5c59691283d8","Towards Finite-Time Consensus with Graph Convolutional Neural Networks","Iancu, A. (Student TU Delft); Isufi, E. (TU Delft Multimedia Computing)","","2020","Atrial electrograms are often used to gain understanding on the development of atrial fibrillation (AF). Using such electrograms, cardiologists can reconstruct how the depolarization wavefront propagates across the atrium. Knowing the exact moment at which the depolarization wavefront in the tissue reaches each electrode, the local activation time (LAT), is an important aspect of such reconstruction. A common way to determine the LAT is based on the steepest deflection (SD) of the individual electrograms. However, the SD annotates each electrogram individually and is expected to be more prone to errors compared to approaches that would employ the data from the surrounding electrodes to estimate the LAT. As electrograms from neighboring electrodes tend to have rather similar morphology up to a delay, we propose in this paper to use the cross-correlation to find the pair-wise relative delays between electrograms. Instead of only using the direct neighbors we consider the array as a graph and involve higher order neighbors as well. Using a least-squares method, the absolute LATs can then be estimated from the calculated pair-wise relative delays. Simulated and clinically recorded electrograms are used to evaluate the proposed approach. From the simulated data it follows that the proposed approach outperforms the SD approach.","Finite-time consensus; graph convolutions; graph signal processing; graph neural networks","en","conference paper","Eurasip","","","","","","","","","","Multimedia Computing","","",""
"uuid:abd4ea54-a401-4f9d-9a3a-639a8a5b2596","http://resolver.tudelft.nl/uuid:abd4ea54-a401-4f9d-9a3a-639a8a5b2596","State-space based network topology identification","Coutino, Mario (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Multimedia Computing); Maehara, T. (RIKEN; Tokyo); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2020","In this work, we explore the state-space formulation of network processes to recover the underlying network structure (local connections). To do so, we employ subspace techniques borrowed from system identification literature and extend them to the network topology inference problem. This approach provides a unified view of the traditional network control theory and signal processing on networks. In addition, it provides theoretical guarantees for the recovery of the topological structure of a deterministic linear dynamical system from input-output observations even though the input and state evolution networks can differ.","Graph signal processing; Signal processing over networks; State-space models; Topology identification","en","conference paper","Eurasip","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-08-29","","","Signal Processing Systems","","",""
"uuid:ac3d9a76-59bb-4c77-ae81-f75a2813ab5d","http://resolver.tudelft.nl/uuid:ac3d9a76-59bb-4c77-ae81-f75a2813ab5d","Privacy-Preserving Distributed Graph Filtering","Li, Qiongxiu (Aalborg University); Coutino, Mario (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems); Christensen, M. Graesboll (Aalborg University)","","2020","In an increasingly interconnected and digitized world, distributed signal processing and graph signal processing have been proposed to process the resulting large amounts of data. However, privacy has become one of the biggest challenges holding back the widespread adoption of these tools for processing sensitive data. As a step towards a solution, we demonstrate the privacy-preserving capabilities of variants of the so-called distributed graph filters. Such implementations allow each node to compute a desired linear transformation of the networked data while protecting its own private data. In particular, the proposed approach eliminates the risk of possible privacy abuse by ensuring that the private data is only available to its owner. Moreover, it preserves the distributed implementation and keeps the same communication and computational cost as its non-secure counterparts. Furthermore, we show that this computational model is secure under both passive and eavesdropping adversary models.
Finally, its performance is demonstrated by numerical tests and it is shown to be a valid and competitive privacy-preserving alternative to traditional distributed optimization techniques.","Distributed computation; Distributed graph filters; Encryption; Graph signal processing; Privacy-preserving","en","conference paper","Eurasip","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-08-29","","","Signal Processing Systems","","",""
"uuid:b938f7ee-a901-47f7-9064-65d54f6b2dbf","http://resolver.tudelft.nl/uuid:b938f7ee-a901-47f7-9064-65d54f6b2dbf","Exact formulas for two interacting particles and applications in particle systems with duality","Carinci, G. (TU Delft Applied Probability); Giardina', C. (Università Degli Studi di Modena e Reggio Emilia); Redig, F.H.J. (TU Delft Applied Probability)","","2020","We consider two particles performing continuous-time nearest neighbor random walk on Z and interacting with each other when they are at neighboring positions. The interaction is either repulsive (partial exclusion process) or attractive (inclusion process). We provide an exact formula for the Laplace-Fourier transform of the transition probabilities of the two-particle dynamics. From this we derive a general scaling limit result, which shows that the possible scaling limits are coalescing Brownian motions, reflected Brownian motions and sticky Brownian motions. In particle systems with duality, the solution of the dynamics of two dual particles provides relevant information. We apply the exact formula to the symmetric inclusion process, which is self-dual, in the condensation regime. We thus obtain two results. First, by computing the time-dependent covariance of the particle occupation number at two lattice sites we characterise the time-dependent coarsening in infinite volume when the process is started from a homogeneous product measure. Second, we identify the limiting variance of the density field in the diffusive scaling limit, relating it to the local time of sticky Brownian motion.","Condensation; Duality; Inclusion process; Interacting particle systems","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:3cef9da8-d432-4d6a-8805-4c094440bd56","http://resolver.tudelft.nl/uuid:3cef9da8-d432-4d6a-8805-4c094440bd56","Replacement optimisation for public infrastructure assets: Quantitative optimisation modelling taking typical public infrastructure related features into account","van den Boomen, M. (TU Delft Integral Design & Management)","Bakker, H.L.M. (promotor); Kapelan, Z. (promotor); Delft University of Technology (degree granting institution)","2020","Ageing infrastructures and shortage of financing induce the need for optimising public infrastructure replacements. From an economic perspective, classical net present value comparison is traditionally the method of choice to decide on investments and replacements. The current research observes that typical infrastructure related features make the classical net present value comparison less suitable in its application for optimising infrastructure replacements. Especially the low discount rate of public sector organisations, price increases and price uncertainty contribute to this phenomenon in which the application of classical net present value comparison leads to suboptimal timing and costs. This observation led to the development of six dedicated replacement optimisation models for common types of infrastructure replacement challenges. A decision support guideline is provided to assist in selecting an appropriate model based on the sequence of intervention strategies, the development of forecasted cash flows and whether uncertainty is involved. The quantitative replacement optimisation models function as blueprints for similar challenges and support a wider decision-making context.","replacement; optimisation; public infrastructure; reliability; real options; uncertainty; Markov decision process","en","doctoral thesis","","978-94-028-1965-6","","","","","","2020-03-25","","","Integral Design & Management","","",""
"uuid:e2622fd3-54fe-4cb1-af3e-36f34123eed6","http://resolver.tudelft.nl/uuid:e2622fd3-54fe-4cb1-af3e-36f34123eed6","Early dengue outbreak detection modeling based on dengue incidences in Singapore during 2012 to 2017","Chen, P. (TU Delft Statistics); Fu, Xiuju (Institute of High Performance Computing); Ma, Stefan (Ministry of Health); Xu, Hai Yan (Institute of High Performance Computing); Zhang, Wanbing (Institute of High Performance Computing); Xiao, Gaoxi (Nanyang Technological University); Siow Mong Goh, Rick (Institute of High Performance Computing); Xu, George (Institute of High Performance Computing); Ching Ng, Lee (National Environment Agency)","","2020","Dengue has been endemic with year-round presence in Singapore. In the recent years 2013, 2014, and 2016, there were several severe dengue outbreaks, posing a serious threat to public health. To proactively control and mitigate the disease spread, early warnings of dengue outbreaks, during which there is rapid and large-scale spread of dengue incidences, are extremely helpful. In this study, a two-step framework is proposed to predict dengue outbreaks and it is evaluated based on the dengue incidences in Singapore during 2012 to 2017. First, a generalized additive model (GAM) is trained based on the weekly dengue incidence data during 2006 to 2011. The proposed GAM is a one-week-ahead forecasting model, and it inherently accounts for the possible correlation among the historical incidence data, making the residuals approximately normally distributed. Then, an exponentially weighted moving average (EWMA) control chart is proposed to sequentially monitor the weekly residuals during 2012 to 2017. Our investigation shows that the proposed two-step framework is able to give persistent signals at the early stage of the outbreaks in 2013, 2014, and 2016, which provides early alerts of outbreaks and wins time for early interventions and the preparation of necessary public health resources.
In addition, extensive simulations show that the proposed method is comparable to other potential outbreak detection methods and it is robust to the underlying data-generating mechanisms.","EWMA control chart; generalized additive model; public health surveillance; statistical process control","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:bb83b81e-4877-4ff5-8de5-0486f7e845fa","http://resolver.tudelft.nl/uuid:bb83b81e-4877-4ff5-8de5-0486f7e845fa","Intimate contact development during laser assisted fiber placement: Microstructure and effect of process parameters","Çelik, O. (TU Delft Structural Integrity & Composites); Peeters, D.M.J. (TU Delft Aerospace Manufacturing Technologies); Dransfeld, C.A. (TU Delft Aerospace Manufacturing Technologies); Teuwen, Julie J.E. (TU Delft Aerospace Manufacturing Technologies)","","2020","Intimate contact development under LAFP-specific thermal and mechanical boundary conditions/interactions and the effect of process parameters are investigated. One-layer, unidirectional strips of CF/PEKK material were placed with different process parameters on a flat tool surface to create different intimate contact conditions. The concept of effective intimate contact, which is based on the resin content at the surface, is introduced and a methodology to measure it from surface micrographs is provided. The degree of effective intimate contact measured from the samples was compared with existing intimate contact models. The temperature history in the compaction zone was estimated with a finite element model and pressure sensitive films were used to determine the compaction pressure. It is shown that in addition to the squeeze flow mechanism, which is the basis of the current intimate contact models, through-thickness percolation flow of the resin needs to be considered to explain the effective intimate contact development.","A. Polymer-matrix composites (PMCs); B. Microstructures; C. Process modeling; E. Automated fiber placement (AFP)","en","journal article","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:4d8e5e09-0c94-458c-ae3b-16c980011e45","http://resolver.tudelft.nl/uuid:4d8e5e09-0c94-458c-ae3b-16c980011e45","Distributed coordination of deferrable loads: A real-time market with self-fulfilling forecasts","Abdelghany, H.A.M.F. (TU Delft Intelligent Electrical Power Grids; Arab Academy for Science, Technology and Maritime Transport); Tindemans, Simon H. (TU Delft Intelligent Electrical Power Grids); de Weerdt, M.M. (TU Delft Algorithmics); la Poutré, J.A. (TU Delft Intelligent Electrical Power Grids; Centrum Wiskunde & Informatica (CWI))","","2020","Increased uptake of variable renewable generation and further electrification of energy demand necessitate efficient coordination of flexible demand resources to make most efficient use of power system assets. Flexible electrical loads are typically small, numerous, heterogeneous and owned by self-interested agents. Considering the multi-temporal nature of flexibility and the uncertainty involved, scheduling them is a complex task. This paper proposes a forecast-mediated real-time market-based control approach (F-MBC) for cost minimizing coordination of uninterruptible time-shiftable (i.e. deferrable) loads. F-MBC is scalable, privacy preserving, and useable by device agents with small computational power. Moreover, F-MBC is proven to overcome the challenge of mutually conflicting decisions from equivalent devices. Simulations in a simplified but challenging case study show that F-MBC produces near-optimal behaviour over multiple time-steps.","Market-based control; Markov decision process; Flexibility; Demand response; Distributed energy resources","en","journal article","","","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:ad05997e-bfe7-49d3-aa9f-4fcd6772371c","http://resolver.tudelft.nl/uuid:ad05997e-bfe7-49d3-aa9f-4fcd6772371c","Dop-NET: A Micro-Doppler Radar Data Challenge","Ritchie, M. (University College London (UCL)); Capraru, R. (University College London (UCL)); Fioranelli, F. (TU Delft Microwave Sensing, Signals & Systems)","","2020","Radar sensors have a new growing application area of dynamic hand gesture recognition. Traditionally radar systems are considered to be very large, complex and focused on detecting targets at long ranges. With modern electronics and signal processing it is now possible to create small compact RF sensors that can sense subtle movements over short ranges. For such applications, access to comprehensive databases of signatures is critical to enable the effective training of classification algorithms and to provide a common baseline for benchmarking purposes. This Letter introduces the Dop-NET radar micro-Doppler database and data challenge to the radar and machine learning communities. Dop-NET is a database of radar micro-Doppler signatures that are shareable and distributed with the purpose of improving micro-Doppler classification techniques. 
A continuous wave 24 GHz radar module is used to capture the first contributions to the Dop-NET database, and classification results based on discriminating these hand gestures are shown.","learning (artificial intelligence); radar signal processing; CW radar; Doppler radar; signal classification; signal processing; compact RF sensors; Dop-NET radar micro-Doppler database; machine learning communities; radar micro-Doppler signatures; micro-Doppler classification techniques; Dop-NET database; micro-Doppler radar data challenge; radar sensors; dynamic hand gesture recognition; continuous wave radar module; frequency 24.0 GHz","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-11-01","","","Microwave Sensing, Signals & Systems","","",""
"uuid:d5b4a20d-bd1d-4eba-a41f-50fdd21cac7c","http://resolver.tudelft.nl/uuid:d5b4a20d-bd1d-4eba-a41f-50fdd21cac7c","Graph-time spectral analysis for atrial fibrillation","Sun, M. (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Multimedia Computing); de Groot, N.M.S. (TU Delft Biomechanical Engineering; Erasmus MC); Hendriks, R.C. (TU Delft Signal Processing Systems)","","2020","Atrial fibrillation is a clinical arrhythmia with multifactorial mechanisms still unresolved. Time-frequency analysis of epicardial electrograms has been investigated to study atrial fibrillation. However, deeper understanding can be achieved by incorporating the spatial dimension. Unfortunately, the physical models describing the spatial relations of atrial fibrillation signals are complex and non-linear; hence, conventional signal processing techniques to study electrograms in the joint space, time, and frequency domain are less suitable. In this study, we wish to put forward a radically different approach to analyze atrial fibrillation with a higher-level model. This approach relies on graph signal processing to represent the spatial relations between epicardial electrograms. To capture the frequency content along both the time and graph domain, we propose the joint graph and short-time Fourier transform. The latter allows us to analyze the spatial variability of the electrogram temporal frequencies. With this technique, we found the spatial variation of the atrial electrograms decreases during atrial fibrillation since the high temporal frequencies of the atrial waves reduce. The proposed analysis further confirms that the ventricular activity is smoother over the atrial area compared with the atrial activity. Besides using the proposed graph-time analysis to conduct a first study on atrial fibrillation, we demonstrate its potential by applying it to the cancellation of ventricular activity from the atrial electrograms. 
Experimental results on simulated and real data further corroborate our findings in this atrial fibrillation study.","Atrial activity extraction; Atrial fibrillation; Graph signal processing; Graph-time signal processing; Spectral analysis","en","journal article","","","","","","","","2022-03-06","","Biomechanical Engineering","Signal Processing Systems","","",""
"uuid:6ee6ec6d-9ffc-461f-9278-da38a7409d01","http://resolver.tudelft.nl/uuid:6ee6ec6d-9ffc-461f-9278-da38a7409d01","Conversations between the Earth and Atmosphere: A study on the seismo-acoustic wavefield","Averbuch, G. (TU Delft Applied Geophysics and Petrophysics)","Evers, L.G. (promotor); Delft University of Technology (degree granting institution)","2020","The study of seismo-acoustic events is by no means new. Observations of earthquake-induced infrasound signals date back to the 1950s. However, the relatively recent deployment of the International Monitoring System (IMS) by the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) provided world coverage for such signals. The continuous monitoring led to many detections of seismo-acoustic events and renewed interest in this field. Driven by unique and complex seismo-acoustic observations, this study uses array processing techniques to analyze the recorded data, back-projections to determine the origins of the infrasonic signals and numerical models to simulate infrasound wave propagation in coupled geophysical systems. The North Korean underground nuclear tests in 2013, 2016, and 2017 generated atmospheric infrasound. Detections were made at the IMS microbarometer arrays in the Russian Federation (I45RU) and Japan (I30JP). These detections formed the basis of the presented empirical studies on the seismo-acoustic wavefield. It is shown that atmospheric variability can explain only part of the observations; therefore, changes in the source characteristics must be considered. Moreover, back-projections show that infrasound radiation is not confined to the epicentral region. More distant regions are found to be consistent with locations of topography, sedimentary basins, and underwater evanescent sources. A seismo-acoustic numerical model is used to simulate long-range infrasound propagation from underwater and underground sources.
The Fast Field Program (FFP) is used to model the seismo-acoustic coupling between the solid Earth, the ocean, and the atmosphere under the variation of source and media parameters. A thorough analysis of the seismo-acoustic coupling mechanisms reveals that evanescent wave coupling and leaky surface waves are the main energy contributors to long-range infrasound propagation. Moreover, it is found that source depth affects the relative amplitude of the tropospheric and stratospheric phases. This characteristic is further employed in an infrasound-based inversion for the source parameters. A Bayesian inversion scheme is tested on synthetic data under variations of the number of stations, the signals' frequency band, and the signal-to-noise ratio (SNR). Also, an ensemble of realistic perturbed atmospheric profiles is used to investigate the effect of atmospheric uncertainties on the inversion results. Results show that variations in the number of stations, their positions, and SNRs lead to source strength estimations with uncertainties up to 50%. However, all of the estimated depths were within a 100 m range from the original source depth.","infrasound; seismo-acoustics; wave propagation; array processing","en","doctoral thesis","","978-94-6384-120-7","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:1a21c6a6-0412-4ea1-b8b4-c1c84c19d68e","http://resolver.tudelft.nl/uuid:1a21c6a6-0412-4ea1-b8b4-c1c84c19d68e","Process stratigraphy: from numerical simulation to lithology prediction","Karamitopoulos, P. (TU Delft Hydraulic Structures and Flood Risk; TU Delft Applied Geology)","Martinius, A.W. (promotor); Weltje, G.J. (promotor); Donselaar, M.E. (copromotor); Delft University of Technology (degree granting institution)","2020","Process-based stratigraphic models provide attractive tools to simulate sedimentary system dynamics spanning a wide range of spatial and temporal scales and segments of the sediment routing system while allowing full access to the model responses, i.e. the spatial distribution of lithologies as a function of the intervening processes and environmental conditions at the time of deposition. Apart from improving our understanding regarding the evolution of sedimentary systems under pre-specified allogenic forcing mechanisms and intrinsic dynamics, process-based stratigraphic models can be used to improve basin-fill history reconstructions and increase the geological credibility of static reservoir models by integrating regional information to local-scale heterogeneities. The realism and predictive power of the model responses and geological model realizations may be quantitatively assessed by comparison with the geophysical/geological data available.","chronosome; avulsions; bifurcation intensity; depositional connectivity; process-based geological modelling","en","doctoral thesis","","978-94-6402-097-7","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:f8e815ab-1c4c-4288-b47e-c122d76a1bdd","http://resolver.tudelft.nl/uuid:f8e815ab-1c4c-4288-b47e-c122d76a1bdd","Rational Chebyshev Graph Filters","Rimleanscaia, Oxana (University of Perugia); Isufi, E. (TU Delft Multimedia Computing)","Matthews, Michael B. (editor)","2020","This paper proposes rational Chebyshev graph filters to approximate step graph spectral responses with arbitrary precision, which are of interest in graph filter banks and spectral clustering. The proposed method relies on the well-known Chebyshev filters of the first kind and on a domain transform of the angular frequencies to the graph frequencies. This approach identifies in closed-form the filter coefficients, hence it avoids the costs of solving a nonlinear problem. Rational Chebyshev graph filters improve the control on the ripples in the pass- and stop-band and on the transition decay. Numerical experiments show the proposed approach approximates better ideal step responses than competing alternatives and reaches the performance of the ideal filters in compressive spectral clustering.","Graph filters; graph signal processing; spectral clustering","en","conference paper","IEEE","","","","","","","","","","Multimedia Computing","","",""
"uuid:a7d16593-3568-44c9-92b6-25e5fa1ff94b","http://resolver.tudelft.nl/uuid:a7d16593-3568-44c9-92b6-25e5fa1ff94b","Preface","Pokojski, Jerzy (Warsaw University of Technology); Gil, Maciej (Oracle); Newnes, Linda (University of Bath); Stjepandić, Josip (PROSTEP AG); Wognum, Nel (TU Delft Air Transport & Operations)","Pokojski, Jerzy (editor); Gil, Maciej (editor); Newnes, Linda (editor); Stjepandic, Josip (editor); Wognum, Nel (editor)","2020","","Natural language processing; Predictive Maintenance; Proportional Hazard Models","en","conference paper","IOS Press","","","","","","","","","","Air Transport & Operations","","",""
"uuid:e4a4a222-dd0c-4603-96ef-f4c78ec378d5","http://resolver.tudelft.nl/uuid:e4a4a222-dd0c-4603-96ef-f4c78ec378d5","Infrastructure maintenance and replacement optimization under multiple uncertainties and managerial flexibility","van den Boomen, M. (TU Delft Integral Design & Management); Spaan, M.T.J. (TU Delft Algorithmics); Shang, Y. (TU Delft Integral Design & Management); Wolfert, A.R.M. (TU Delft Integral Design & Management)","","2020","Infrastructure maintenance and replacement decisions are subject to uncertainties such as regular asset degradation, structural failure, and price uncertainty. In the engineering domain, Markov Decision Processes (MDPs) typically focus on uncertainties regarding asset degradation and structural failure. While the literature in the engineering domain stresses the importance of addressing price uncertainties, it does not substantiate the observations of such uncertainties through optimization modeling. By contrast, real option analyses (ROAs) that originate from the financial domain address price uncertainties but generally disregard asset degradation and structural failure. Accordingly, the current research brings both domains closer together and proposes an optimization approach that incorporates the flexibility to choose between multiple successive intervention strategies, regular asset degradation, structural failure and multiple price uncertainties. A practical result of the current research is a realistic approach to optimization modeling in which state space reduction is achieved by combining prices into portfolios. The current research obtains transition probabilities from existing price data.
This approach is demonstrated using a case study of a water authority in the Netherlands and confirms the premise that price fluctuations may influence short-term maintenance and replacement decisions.","Maintenance; Markov Decision Process; optimization; real options analysis; replacement","en","journal article","","","","","","","","","","","Integral Design & Management","","",""
"uuid:b1ed66cf-4de5-4d90-b964-29ae7c362b3e","http://resolver.tudelft.nl/uuid:b1ed66cf-4de5-4d90-b964-29ae7c362b3e","A modelling based study on the integration of 10 MWth indirect torrefied biomass gasification, methanol and power production","Del Grosso, M. (TU Delft Large Scale Energy Storage); Sridharan, Balaji (Rijksuniversiteit Groningen); Tsekos, C. (TU Delft Large Scale Energy Storage); Klein, S.A. (TU Delft Energy Technology); de Jong, W. (TU Delft Large Scale Energy Storage; Rijksuniversiteit Groningen)","","2020","This work is focused on the process system modelling of an indirectly heated gasifier (10 MWth) using torrefied wood as feedstock and its integration with methanol and power production using Aspen Plus®. The modelling of the gasification process along with the obtained reaction kinetics were validated with experimental data found in literature. Different processing steps such as gasification, gas cleaning and upgrading, methanol synthesis and energy conversion, were modelled and their performance was optimized through a series of sensitivity studies. The results obtained were then used to investigate the effect of different technologies and the variation of operational parameters on the overall process performance. Three cases were examined: “syngas production” (case 1), “methanol production” (case 2), and “power production” (IGCC) (case 3). Case 1 and case 2 were simulated using sand and dolomite as bed materials respectively, in order to study the incorporation of Absorption Enhanced Reforming (AER) on the syngas and methanol production efficiency. For case 3 the simulation was performed for two different configurations: a conventional Integrated Gasification Combined Cycle (IGCC) and an innovative Inverted Brayton Cycle (IBC) turbine system. Dolomite was used as the bed material for both configurations. For case 1, an increase of 5% in hydrogen yield in the product gas when AER is applied was observed. 
For case 2, higher values of Cold Gas Efficiency and Net Efficiency (34% and 60% instead of 33% and 55%, respectively) and a slightly lower value of Carbon Conversion (96% instead of 100%) were obtained when AER was employed. Gasification temperature was lowered by 110 °C in this scenario. For case 3, a lower value of Net Efficiency was obtained when IBC was considered (43% instead of 47%), while a value of 60% was obtained for methanol production with AER. Moreover, the results of case 3 showed that the latent heat in the hot syngas is best utilised when IBC is considered. The developed model accurately predicted the composition of the produced gas and the operational conditions of all the identified blocks within the methanol synthesis and power production processes. In this way, the use of this model as a generic tool to compare the effect of different technologies on the overall process performance was validated.","Absorption enhanced reforming; Allothermal gasification; Biomethanol; Integrated gasification combined cycle systems; Process system modelling; Tar removal","en","journal article","","","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:5b197d0c-8132-4156-866f-d31bf22f9a05","http://resolver.tudelft.nl/uuid:5b197d0c-8132-4156-866f-d31bf22f9a05","Advancing ecohydrology in the 21st century: A convergence of opportunities","Guswa, Andrew J. (Smith College); Tetzlaff, Doerthe (Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB); Humboldt-Universitat zu Berlin); Selker, John S. (Oregon State University); Carlyle-Moses, Darryl E. (Thompson Rivers University); Boyer, Elizabeth W. (Pennsylvania State University); Bruen, Michael (University College Dublin); Cayuela, Carles (Institute of Environmental Assessment and Water Research (IDAEA-CSIC)); Creed, Irena F. (University of Saskatchewan); van de Giesen, N.C. (TU Delft Water Resources); Grasso, Domenico (University of Michigan-Dearborn)","","2020","Nature-based solutions for water-resource challenges require advances in the science of ecohydrology. Current understanding is limited by a shortage of observations and theories that can further our capability to synthesize complex processes across scales ranging from submillimetres to tens of kilometres. Recent developments in environmental sensing, data, and modelling have the potential to drive rapid improvements in ecohydrological understanding. After briefly reviewing advances in sensor technologies, this paper highlights how improved measurements and modelling can be applied to enhance understanding of the following ecohydrological examples: interception and canopy processes, root uptake and critical zone processes, and up-scaled effects of land use on streamflow. Novel and improved sensors will enable new questions and experiments, while machine learning and empirical methods provide additional opportunities to advance science. 
The synergy resulting from the convergence of these parallel developments will provide new insight into ecohydrological processes and thereby help identify nature-based solutions to address water-resource challenges in the 21st century.","critical zone processes; environmental sensing; interception; land use; machine learning; measurement; modelling; streamflow","en","review","","","","","","","","","","","Water Resources","","",""
"uuid:948ba82d-7cb7-4b6d-96ad-733298b0917a","http://resolver.tudelft.nl/uuid:948ba82d-7cb7-4b6d-96ad-733298b0917a","Performance improvements during mineral processing using material fingerprints derived from machine learning—A conceptual framework","van Duijvenbode, J.R. (TU Delft Resource Engineering); Buxton, M.W.N. (TU Delft Resource Engineering); Soleymani Shishvan, M. (TU Delft Resource Engineering)","","2020","Material attributes (e.g., chemical composition, mineralogy, texture) are identified as the causative source of variations in the behaviour of mineral processing. That makes them suitable to act as key characteristics to characterise and classify material. Therefore, vast quantities of collected data describing material attributes could help to forecast the behaviour of mineral processing. This paper proposes a conceptual framework that creates a data-driven link between ore and the processing behaviour through the creation of material “fingerprints”. A fingerprint is a machine learning-based classification of measured material attributes compared to the range of attributes found within the mine’s mineral reserves. The outcome of the classification acts as a label for a machine learning model and contains relevant information, which may identify the root cause of measured differences in processing behaviour. Therefore, this class label can forecast the associated behaviour of mineral processing. Furthermore, insight is given into the confidence of available data originating from different analytical techniques. Taken together, this enhances the understanding of how differences in geology impact metallurgical plant performance. 
Targeted measurements at low-confidence unit processes and for specific attributes would upgrade the confidence in fingerprints and capabilities to predict plant performance.","Behavioural prediction; Data confidence; Machine learning; Material fingerprints; Mineral processing; Mining","en","journal article","","","","","","","","","","","Resource Engineering","","",""
"uuid:48433c48-8c0b-46a1-bd0f-d9472af14010","http://resolver.tudelft.nl/uuid:48433c48-8c0b-46a1-bd0f-d9472af14010","Evaluating alluvial stratigraphic response to cyclic and non-cyclic upstream forcing through process-based alluvial architecture modelling","Wang, Y. (TU Delft Applied Geology); Storms, J.E.A. (TU Delft Applied Geology); Martinius, A.W. (TU Delft Applied Geology; Statoil ASA); Karssenberg, Derek (Universiteit Utrecht); Abels, H.A. (TU Delft Applied Geology)","","2020","Formation of alluvial stratigraphy is controlled by autogenic processes that mix their imprints with allogenic forcing. In some alluvial successions, sedimentary cycles have been linked to astronomically-driven, cyclic climate changes. However, it remains challenging to define how such cyclic allogenic forcing leads to sedimentary cycles when it continuously occurs in concert with autogenic forcing. Accordingly, we evaluate the impact of cyclic and non-cyclic upstream forcing on alluvial stratigraphy through a process-based alluvial architecture model, the Karssenberg and Bridge (2008) model (KB08). The KB08 model depicts diffusion-based sediment transport, erosion and deposition within a network of channel belts and associated floodplains, with river avulsion dependent on lateral floodplain gradient, flood magnitude and frequency, and stochastic components. We find cyclic alluvial stratigraphic patterns to occur when there is cyclicity in the ratio of sediment supply over water discharge (Qs/Qw ratio), in the precondition that the allogenic forcing has sufficiently large amplitudes and long, but not very long, wavelengths, depending on inherent properties of the modelled basin (e.g. basin subsidence, size, and slope). Each alluvial stratigraphic cycle consists of two phases: an aggradation phase characterized by rapid sedimentation due to frequent channel shifting and a non-deposition phase characterized by channel belt stability and, depending on Qs/Qw amplitudes, incision. 
Larger Qs/Qw ratio amplitudes contribute to weaker downstream signal shredding by stochastic components in the model. Floodplain topographic differences are found to be compensated by autogenic dynamics at certain compensational timescales in fully autogenic runs, while the presence of allogenic forcing clearly impacts the compensational stacking patterns.","alluvial stratigraphy; compensational timescale; cyclicity; process-based alluvial architecture modelling; signal preservation and shredding","en","journal article","","","","","","","","","","","Applied Geology","","",""
"uuid:cebeb915-766a-4e28-b2b8-eecc2214d6ee","http://resolver.tudelft.nl/uuid:cebeb915-766a-4e28-b2b8-eecc2214d6ee","Road Infrastructure Requirements for Improved Performance of Lane Assistance Systems","Reddy, N. (TU Delft Transport and Planning); Farah, H. (TU Delft Transport and Planning); Dekker, Thijs (Provincie Noord-Holland); Huang, Yilin (TU Delft System Engineering); van Arem, B. (TU Delft Transport and Planning)","","2020","There is a pressing need for road authorities to take a proactive role in the deployment of automated vehicles on the existing road network. This requires a comprehensive understanding of the road infrastructure requirements that would lead to safe operation of automated vehicles. In this context, a field test with Lane Departure Warning and Lane Keeping Systems-enabled vehicles was conducted in the province of North Holland, The Netherlands. The performance of these automated systems was evaluated using performance indicators such as Mean Lateral Position and Standard Deviation of Lane Position. In this study, the Systems Theoretic Accident Modelling and Processes (STAMP) model was adopted to understand the relationships between the various components of the “Road System”, which in this study include the road authority, the automated vehicle system, elements of the road infrastructure, and weather conditions. Empirical data from the experiment is used to estimate the relationships between the different components, followed by the assessment of their impact on the performance of the automated vehicles. It was found that visibility conditions have a significant effect on detection performance, which worsens in rainy conditions especially under streetlights. It has been also observed that there is a significant difference in Lane Position between Left Curves and Straight sections, and between lane widths less than 250 cms and those that have larger widths. 
These findings are combined with the results from the STAMP analysis to formulate a set of road infrastructure requirements that would lead to safe performance of Lane Assistance Systems.","Automated Driving; Lane Assistance Systems; Systems Theory; Systems Theoretic Accident Modeling and Processes (STAMP); Infrastructure effects; Road design","en","poster","","","","","","","","","","","Transport and Planning","","",""
"uuid:aef87d8c-c476-4ffa-bd4d-8b48e68e6e93","http://resolver.tudelft.nl/uuid:aef87d8c-c476-4ffa-bd4d-8b48e68e6e93","A computationally efficient thermal model for selective laser melting","Yang, Y. (Sun Yat-sen University); van Keulen, A. (TU Delft Computational Design and Mechanics); Ayas, C. (TU Delft Computational Design and Mechanics)","","2020","Selective laser melting (SLM) is a widely used additive manufacturing method for building metal parts in a layer-by-layer manner thereby imposing almost no limitations on the geometrical layout of the part. The SLM process has a crucial impact on the microstructure, strength, surface quality and even the shape of the part, all of which depend on the thermal history of material points within the part. In this paper, we present a computationally tractable thermal model for the SLM process which accounts for individual laser scanning vectors. First, a closed form solution of a line heat source is calculated to represent the laser scanning vectors in a semi-infinite space. The thermal boundary conditions are accounted for by a complimentary correction field, which is computed numerically. The total temperature field is obtained by the superposition of the two. The proposed semi-analytical model can be used to simulate manufacturing geometrically complex parts and allows spatial discretisation to be much coarser than the characteristic length scale of the process: laser spot size, except in the vicinity of boundaries. The underlying assumption of linearity of the heat equation in the proposed model is justified by comparisons with a fully non-linear model and experiments. The accuracy of the proposed boundary correction scheme is demonstrated by a dedicated numerical example on a simple cubic part. The influence of the part design and scanning strategy on the temperature transients are subsequently analysed on a geometrically complex part. 
The results show that overhanging features of a part obstruct the heat flow towards the base-plate, thereby creating local overheating, which in turn decreases the local cooling rate. Finally, a real SLM process for a part with an overhanging feature is modelled for validation of the proposed model. Reasonable agreement between the model predictions and the experimentally measured values can be observed.","Powder bed fusion; Process modelling; Thermal modelling; Semi-analytical model; Superposition","en","journal article","","","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:dee3bd33-466b-4c42-9110-704abe4c0c60","http://resolver.tudelft.nl/uuid:dee3bd33-466b-4c42-9110-704abe4c0c60","A Novel Defect Diagnosis Method for Kyropoulos Process Based Sapphire Growth","Zhang, Wei (Taiyuan University of Technology); Qiao, Tiezhu (Taiyuan University of Technology); Pang, Y. (TU Delft Transport Engineering and Logistics); Yang, Yi (Taiyuan University of Technology); Chen, Hong (Shanxi Zhongjujingke Semiconductor Co.); Hao, Guirong (Shanxi Zhongjujingke Semiconductor Co.)","","2020","When sapphire crystal is prepared with Kyropoulos method, the necking-down growth process is a key stage. Sapphire growth defect is a big problem in this stage. However, diagnosing growth defects is subject to the interference of workers subjectivity and accuracy always goes down. To address the problem, a novel defect diagnosis method is proposed for necking-down growth process in this paper. Industrial CCD sensors replace eyes of skilled workers to observe in this method. A new Defect-Diagnosing Siamese network (DDSN) is used in this method. We use Siamese architecture to learn similarity through pairs of images. We use the deep separable convolution (DSC) into the DDSN to optimize running speed and model size. In experiment, dataset is acquired by industrial CCD sensors in the necking-down growth process. The accuracy of defect diagnosis can reach up to 94.5%. The method significantly improves the traditional way.","CCD sensor; Defect-Diagnosing Siamese network; Necking-down process; Sapphire Growth Defects","en","journal article","","","","","","Accepted Author Manuscript","","","","","Transport Engineering and Logistics","","",""
"uuid:07b0d80e-4516-42fc-9f23-be4123b8ae98","http://resolver.tudelft.nl/uuid:07b0d80e-4516-42fc-9f23-be4123b8ae98","Accelerating short range MIMO imaging with optimized Fourier processing","Fromentèze, Thomas (University of Limoges); Yurduseven, Okan (Queen's University Belfast); Berland, Fabien (University of Limoges); Decroze, Cyril (University of Limoges); Smith, David R. (Duke University); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2020","In this paper, we describe the recent development of new algorithms applied to short-range radar imaging. Facing the limitations of classical backpropagation algorithms, the use of techniques based on Fast Fourier Transforms has led to substantial image computation accelerations, especially for Multiple-Input Multiple-Output systems. The necessary spatial interpolation and zero-padding steps are still particularly limiting in this context, so it is proposed to replace it by a more efficient matrix technique, showing improvements in memory consumption, image computation speed and reconstruction quality.","Fourier processing; Microwave imaging; Millimeter wave imaging; MIMO radar; Omega-k algorithm","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-10-23","","","Microwave Sensing, Signals & Systems","","",""
"uuid:fc9cf4d6-82fe-402e-b866-1029832c5a77","http://resolver.tudelft.nl/uuid:fc9cf4d6-82fe-402e-b866-1029832c5a77","Urban Commoning and Architectural Situated Knowledge: The Architects’ Role in the Transformation of the NDSM Ship Wharf, Amsterdam","Havik, K.M. (TU Delft Situated Architecture); Pllumbi, Dorina (TU Delft Situated Architecture)","","2020","This article discusses the collaborative processes behind the redevelopment of the Dutch state heritage ship wharf NDSM in Amsterdam as a case of urban commoning that took place around the year 2000 – before the term became commonly used in urban studies. It explores how the former shipwharf was transformed into an “incubator”: a creative hub with artist studios, theater spaces, a skate park and other facilities for cultural production. In this article, we specifically investigate the role of architects in this context. Unfolding the process reveals the emergence of the figure of the participant-architect who participates in the shared authorship, within a collective situated knowledge. This knowledge is simultaneously produced in the place and productive of the place.","NDSM Amsterdam; architectural knowledge; collective; process; urban commoning; users","en","journal article","","","","","","","","","","","Situated Architecture","","",""
"uuid:cf6ce406-33a6-4f61-bc14-e2237693fc6d","http://resolver.tudelft.nl/uuid:cf6ce406-33a6-4f61-bc14-e2237693fc6d","Dispersion and Nonlinearity Identification for Single-Mode Fibers Using the Nonlinear Fourier Transform","de Koster, P.B.J. (TU Delft Team Sander Wahls); Wahls, S. (TU Delft Team Sander Wahls)","","2020","Efficient fiber-optic communication requires precise knowledge of the fiber coefficients, but these often change over time due to factors such as aging or bending. We propose a novel algorithm that identifies the average second-order dispersion and Kerr nonlinearity coefficient of a fiber, without employing any special training signals. Instead, ordinary input and output data recorded during normal operation is used. To the best of our knowledge, this is the first such algorithm. The algorithm is based on the nonlinear Fourier spectrum of the signal, which is known to evolve trivially as the signal propagates through an idealized model of the fiber. The algorithm varies the values of the fiber coefficients until the corresponding nonlinear Fourier spectrum at transmitter and receiver match optimally. We test the algorithm on simulated transmission data over a 1600 km link, and accurately identify the fiber coefficients. The identification algorithm is in particular well suited for providing a fiber model for nonlinear Fourier transform-based communication.","Chromatic dispersion; digital signal processing; fiber identification; fiber-optic communications; Kerr nonlinear effect; nonlinear Fourier transform","en","journal article","","","","","","Accepted Author Manuscript","","","","","Team Sander Wahls","","",""
"uuid:053489ef-776b-455c-9715-f5e6ff53c3c3","http://resolver.tudelft.nl/uuid:053489ef-776b-455c-9715-f5e6ff53c3c3","Three-dimensional Marchenko internal multiple attenuation on narrow azimuth streamer data of the Santos Basin, Brazil","Staring, M. (TU Delft Applied Geophysics and Petrophysics); Wapenaar, C.P.A. (TU Delft ImPhys/Medical Imaging; TU Delft Applied Geophysics and Petrophysics)","","2020","In recent years, a variety of Marchenko methods for the attenuation of internal multiples has been developed. These methods have been extensively tested on two-dimensional synthetic data and applied to two-dimensional field data, but only little is known about their behaviour on three-dimensional synthetic data and three-dimensional field data. Particularly, it is not known whether Marchenko methods are sufficiently robust for sparse acquisition geometries that are found in practice. Therefore, we start by performing a series of synthetic tests to identify the key acquisition parameters and limitations that affect the result of three-dimensional Marchenko internal multiple prediction and subtraction using an adaptive double-focusing method. Based on these tests, we define an interpolation strategy and use it for the field data application. Starting from a wide azimuth dense grid of sources and receivers, a series of decimation tests are performed until a narrow azimuth streamer geometry remains. We evaluate the effect of the removal of sail lines, near offsets, far offsets and outer cables on the result of the adaptive double-focusing method. These tests show that our method is most sensitive to the limited aperture in the crossline direction and the sail line spacing when applying it to synthetic narrow azimuth streamer data. The sail line spacing can be interpolated, but the aperture in the crossline direction is a limitation of the acquisition. 
Next, we apply the adaptive Marchenko double-focusing method to the narrow azimuth streamer field data from the Santos Basin, Brazil. Internal multiples are predicted and adaptively subtracted, thereby improving the geological interpretation of the target area. These results imply that our adaptive double-focusing method is sufficiently robust for the application to three-dimensional field data, although the key acquisition parameters and limitations will naturally differ in other geological settings and for other types of acquisition.","Acoustics; Data processing; Seismics","en","journal article","","","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:c58ca52c-3c3b-4300-9b4f-40890d292ba3","http://resolver.tudelft.nl/uuid:c58ca52c-3c3b-4300-9b4f-40890d292ba3","The Effect of Biomass Pellet Length, Test Conditions and Torrefaction on Mechanical Durability Characteristics According to ISO Standard 17831-1","Gilvari, H. (TU Delft Transport Engineering and Logistics); de Jong, W. (TU Delft Large Scale Energy Storage); Schott, D.L. (TU Delft Transport Engineering and Logistics)","","2020","With the recent increase in biomass pellet consumption, the mechanical degradation of pellets during transport and handling has become more important. ISO standard 17831-1 is an accepted global standard that is commonly used amongst researchers and industries to determine the mechanical durability of pellets. However, the measured mechanical durability sometimes fails to match the certificate accompanying the shipment. In such cases, pellet length specifications are suspected to play a role. This paper studies the effect of pellet length on mechanical durability for various types of commercially produced biomass pellets. In addition, the effect of test conditions and torrefaction on the mechanical durability of biomass pellets has been investigated. To study the effect of pellet length, pellets were classified into three groups: shorter than 15 mm, 15 to 30 mm, and longer than 30 mm, and their length distributions were measured using an in-house image processing tool. Then, the mechanical durability of pellets was measured using ISO standard 17831-1. The mechanical durability results were compared to random-sized pellet samples. To study the effect of test conditions, the mechanical durability test was operated at different time intervals to elucidate the effect of tumbling at different conditions. The results show that the mechanical durability depends highly on the length distribution of the pellets, with a difference between categories of up to 13%. 
It was also observed that the mechanical durability remains relatively constant after a specific time interval. Based on the results, we highly recommend modifying the current ISO standard to account for the pellet length distribution (PLD).","biomass pellet; mechanical durability; ISO standard 17831-1; pellet length distribution; image processing","en","journal article","","","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:285e24b2-1bec-4733-a807-5dd601715347","http://resolver.tudelft.nl/uuid:285e24b2-1bec-4733-a807-5dd601715347","Forecasting Multi-Dimensional Processes Over Graphs","Natali, A. (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Multimedia Computing); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2020","The forecasting of multi-variate time processes through graph-based techniques has recently been addressed under the graph signal processing framework. However, problems in the representation and the processing arise when each time series carries a vector of quantities rather than a scalar one. To tackle this issue, we devise a new framework and propose new methodologies based on the graph vector autoregressive model. More explicitly, we leverage product graphs to model the high-dimensional graph data and develop multidimensional graph-based vector autoregressive models to forecast future trends with a number of parameters that is independent of the number of time series and a linear computational complexity. Numerical results demonstrating the prediction of moving point clouds corroborate our findings.","Forecasting; graph signal processing; product graphs; time series; vector autoregressive model","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-11-14","","","Signal Processing Systems","","",""
"uuid:31a08eed-9416-4787-ac30-9f41f45695fd","http://resolver.tudelft.nl/uuid:31a08eed-9416-4787-ac30-9f41f45695fd","ArrowSAM: In-Memory Genomics Data Processing Using Apache Arrow","Ahmad, T. (TU Delft Computer Engineering); Ahmed, N. (TU Delft Numerical Analysis; TU Delft Quantum & Computer Engineering); Peltenburg, J.W. (TU Delft Computer Engineering); Al-Ars, Z. (TU Delft Computer Engineering)","","2020","The rapidly growing size of genomics data bases, driven by advances in sequencing technologies, demands fast and cost-effective processing. However, processing this data creates many challenges, particularly in selecting appropriate algorithms and computing platforms. Computing systems need data closer to the processor for fast processing. Traditionally, due to cost, volatility and other physical constraints of DRAM, it was not feasible to place large amounts of working data sets in memory. However, new emerging storage class memories allow storing and processing big data closer to the processor. In this work, we show how the commonly used genomics data format, Sequence Alignment/Map (SAM), can be presented in the Apache Arrow in-memory data representation to benefit of in-memory processing and to ensure better scalability through shared memory objects, by avoiding large (de)-serialization overheads in cross-language interoperability. To demonstrate the benefits of such a system, we propose ArrowSAM, an in-memory SAM format that uses the Apache Arrow framework, and integrate it into genome pre-processing pipelines including BWA-MEM, Picard and Sambamba. Results show 15x and 2.4x speedups as compared to Picard and Sambamba, respectively. 
The code and scripts for running all workflows are freely available at https://github.com/abs-tudelft/ArrowSAM.","Apache Arrow; Big Data; Genomics; In-Memory; Parallel Processing; Whole Genome/Exome Sequencing","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:3e2c3bbf-1552-48cf-b024-f9125768ef85","http://resolver.tudelft.nl/uuid:3e2c3bbf-1552-48cf-b024-f9125768ef85","A Fast Protection of Multi-terminal HVDC System Based on Transient Signal Detection","Liu, Lian (Prysmian Group); Liu, Zhou (Aalborg University); Popov, M. (TU Delft Intelligent Electrical Power Grids); Palensky, P. (TU Delft Intelligent Electrical Power Grids); van der Meijden, M.A.M.M. (TU Delft Intelligent Electrical Power Grids; TenneT TSO B.V.)","","2020","HVDC technologies are widely acknowledged as one of solutions for the interconnection of renewable energy resources with the main electric power grid. The application of the latest modular multi-level converter (MMC) makes power conversion much more efficient. Due to the relatively low impedance in a DC system, DC fault currents may rise to an extremely high level in a short period of time, which can be very dangerous for HVDC converters. To improve the sustainability and security of electricity transmission, protection solutions for HVDC systems are being developed. Nevertheless, they have various drawbacks on fault signal detection and timely clearance. This paper proposes a protection method that provides a fast and reliable solution addressing those drawbacks. A protection algorithm based on travelling wave simulation and analysis is proposed to detect abrupt transient signals. The algorithm shows high efficiency, reliability, selectivity and has low sampling frequency requirements. The proposed protection method has been validated through a cyber-physical simulation platform, developed using a real-time digital simulator (RTDS) and IEC 61850 communication links. 
The obtained results show that the proposed method has good potential for practical applications.","Electromagnetic transients (EMT); high voltage direct current (HVDC); modular multi-level converter (MMC); protection; real time digital simulator (RTDS); signal processing; voltage source converter (VSC); IEC 61850","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-08-17","","","Intelligent Electrical Power Grids","","",""
"uuid:b81b5570-1a67-4132-8c29-f3eeba57f3a8","http://resolver.tudelft.nl/uuid:b81b5570-1a67-4132-8c29-f3eeba57f3a8","The revitalization of service orientation: a business services model","Plugge, A.G. (TU Delft Marketing and Consumer Research); Nikou, S.N. (TU Delft Information and Communication Technology; Åbo Akademi University; Stockholm University); Bouwman, W.A.G.A. (TU Delft Information and Communication Technology)","","2020","Purpose: Due to the convergence of rapid business developments and digitization challenges, service orientation is back on the research agenda as a concept to improve firms’ business services. Yet, little is known about the type of determinants that are relevant and to what degree they affect a firm’s service-oriented strategy. Design/methodology/approach: Building on structural equation modeling (SEM) and a unique data set of 131 international firms from different continents, the authors identify and analyze the key determinants in the context of a firm’s service-oriented strategy. Findings: The findings show that in order to cater for changes, organizations have to manage and adapt the coherence of the determinants’ business services, business processes and knowledge sharing continuously. Moreover, the results show that a service-oriented strategy is not only influenced by business services as such, but business services mediate the relationships between business processes, governance and process-aware information systems to a service-oriented strategy. Research limitations/implications: A limitation is imposed by the limited sample size and the unbalanced response of participants (executive management). In future research, a more extensive survey among a broader group of participants will help the authors to develop their model further in order to generalize the results, as well as more finely grained research related to geography and size might be pursued. 
Future empirical research is necessary to identify and test the relationships between other constructs and study their effect on a firm’s service-oriented strategy. Practical implications: On a practical level, the authors postulate that an organization’s executive management should invest in an organizational entity (department) that manages business services continuously. This organizational entity has to ensure that related processes and knowledge sharing are in place to establish and maintain a service-oriented strategy. Originality/value: This research contributes to service-oriented literature by operationalizing the implementation of an organization’s service-oriented strategy. The authors’ insights go beyond the findings of Aier et al. (2011). The authors found that a service-oriented strategy influences service-oriented project success positively. The authors extended these findings, based on a unique data set, by studying business services and influencing determinants (i.e. business processes, governance, PAIS and knowledge sharing) within the context of service orientation. The renewed attention to the concept of service orientation provides insights into critical determinants that influence the implementation of a service-oriented strategy.","Business processes; Business services; Governance; Organizational readiness; Process-aware information systems; Service orientation","en","journal article","","","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:7c57138e-69e9-445a-8a00-6740e30b0781","http://resolver.tudelft.nl/uuid:7c57138e-69e9-445a-8a00-6740e30b0781","Morphodynamic Evolution of a Fringing Sandy Shoal: From Tidal Levees to Sea Level Rise","Elmilady, H.M.S.M.A. (TU Delft Coastal Engineering; IHE Delft Institute for Water Education; Deltares); Van Der Wegen, M. (IHE Delft Institute for Water Education; Deltares); Roelvink, D. (TU Delft Coastal Engineering; IHE Delft Institute for Water Education; Deltares); van der Spek, A. (Deltares; Universiteit Utrecht)","","2020","Intertidal shoals are vital components of estuaries. Tides, waves, and sediment supply shape the profile of estuarine shoals. Ensuring their sustainability requires an understanding of how such systems will react to sea level rise (SLR). In contrast to mudflats, sandy shoals have drawn limited attention in research. Inspired by a channel-shoal system in the Western Scheldt Estuary (Netherlands), this research investigates governing processes of the long-term morphodynamic evolution of intertidal estuarine sandy shoals across different timescales. We apply a high-resolution process-based numerical model (Delft3D) to generate a channel-shoal system in equilibrium and expose the equilibrium profile to variations in wave forcing and SLR. Combined tidal action and wave forcing initiate ridge formation at the seaward shoal edge, which slowly propagates landward until a linear equilibrium profile develops within 200 years. Model simulations in which forcing conditions have been varied to reproduce observations show that the bed is most dynamic near the channel-shoal interface. A decrease/increase in wave forcing causes the formation/erosion of small tidal levees at the shoal edge, which shows good resemblance to observed features. The profile recovers when regular wave forcing applies again. Sandy shoals accrete in response to SLR with a long (decades) bed-level adaptation lag eventually leading to intertidal area loss. 
This lag depends on the forcing conditions and is lowest near the channel and gradually increases landward. Adding mud makes the shoal more resilient to SLR. Our study suggests that processes near the channel-shoal interface are crucial to understanding the long-term morphodynamic development of sandy shoals.","Intertidal sandy shoals; Long-term estuarine morphodynamics; Process-based modeling; Sea-level rise; Tidal levees; Waves","en","journal article","","","","","","","","","","","Coastal Engineering","","",""
"uuid:bddfd806-cefe-4a6f-9133-546017eb4dc4","http://resolver.tudelft.nl/uuid:bddfd806-cefe-4a6f-9133-546017eb4dc4","The healthcare design dilemma: perils of a technology-driven design process for medical products","Wilke, H. (Anhalt University of Applied Sciences Dessau); Badke-Schaub, P.G. (TU Delft Methodologie en Organisatie van Design); Thoring, K.C. (TU Delft Design Aesthetics; Anhalt University of Applied Sciences Dessau)","","2020","This paper reports an embedded single case study from a globally operating manufacturer for digital healthcare products. Based on nine semi-structured interviews, document analysis, and a diary study among employees, we were able to gain insights on the daily business routines and interactions of the design team, the UX research team, and the product management department. The results revealed several unexpected insights that indicate a practical mismatch between user-centred design processes learned from the textbook and design practice in the healthcare sector that warrant further research.","design process; healthcare design; industrial design; user experience; user-centred design","en","journal article","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:bbb6d38c-81e0-423a-bb82-bc3d8c261436","http://resolver.tudelft.nl/uuid:bbb6d38c-81e0-423a-bb82-bc3d8c261436","From surface seismic data to reservoir elastic parameters using a full-wavefield redatuming approach","Garg, A. (TU Delft ImPhys/Computational Imaging); Verschuur, D.J. (TU Delft ImPhys/Computational Imaging)","","2020","Traditionally, reservoir elastic parameters inversion suffers from the overburden multiple scattering and transmission imprint in the local input data used for the target-oriented inversion. In this paper, we present a full-wavefield approach, called reservoir-oriented joint migration inversion (JMI-res), to estimate the high-resolution reservoir elastic parameters from surface seismic data.As a first step in JMI-res, we reconstruct the fully redatumed data (local impulse responses) at a suitable depth above the reservoir from the surface seismic data, while correctly accounting for the overburden interal multiples and transmission losses. Next, we apply a localized elastic full waveform inversion on the estimated impulse responses to get the elastic parameters. We show that JMI-res thus provides much more reliable local target impulse responses, thus yielding high-resolution elastic parameters, compared to a standard redatuming procedure based on time reversal of data. Moreover, by using this kind of approach we avoid the need to apply a full elastic full waveform inversion-type process for the whole subsurface, as within JMI-res elastic full waveform inversion is only restricted to the reservoir target domain.","Image processing; Inverse theory; Tomography; Wave scattering and diffraction; Waveform inversion","en","journal article","","","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:66dac53d-281c-4183-8fae-8039ed2db454","http://resolver.tudelft.nl/uuid:66dac53d-281c-4183-8fae-8039ed2db454","Mechanical integrity of process installations: Barrier alarm management based on bowties","Schmitz, P.J.H. (TU Delft Safety and Security Science; OCI-Nitrogen); Swuste, P.H.J.J. (TU Delft Safety and Security Science); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); van Nunen, K.L.L. (TU Delft Safety and Security Science; Universiteit Antwerpen)","","2020","A Safety Research project was carried out in an ammonia plant of OCI Nitrogen, located at the Chemelot site in Geleen, The Netherlands. This research focused on the development of a method to monitor accident processes in the chemical industry mainly caused by mechanical integrity of static equipment like vessels, tanks and heat exchangers. A significant part of the mechanical integrity failure scenarios originates from material degradation and corrosion mechanisms which may develop over a relatively long-time period, possibly taking months, years or even longer. Mechanical failure scenarios from two process units have been worked out and visualized using a bowtie. The research project shows that the monitoring of early warnings can provide information about the current development of mechanical failure scenarios. In addition, early warnings can be used to initiate inspections if there is a likelihood that the mechanical failure scenario has been activated. Considering the shift from breakdown maintenance to preventive and predictive maintenance and risk-based inspection (RBI), inspections based on early warnings could also be a new step in the field of maintenance efficiency.","Ammonia; Bowtie; Integrity; Mechanical failure mechanism; Process safety indicator; Risk-based inspections","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:2cac6719-c0e0-499c-97d3-f7e70ffda7df","http://resolver.tudelft.nl/uuid:2cac6719-c0e0-499c-97d3-f7e70ffda7df","Vocabulary of radioanalytical methods (IUPAC Recommendations 2020)","Chai, Zhifang (Institute of High Energy Physics Chinese Academy of Science); Chatt, Amares (Dalhousie University); Bode, P. (TU Delft RST/Applied Radiation & Isotopes); Kučera, Jan (Czech Academy of Sciences); Greenberg, Robert (National Institute of Standards and Technology); Hibbert, David B. (University of New South Wales)","","2020","These recommendations are a vocabulary of basic radioanalytical terms which are relevant to radioanalysis, nuclear analysis and related techniques. Radioanalytical methods consider all nuclear-related techniques for the characterization of materials where 'characterization' refers to compositional (in terms of the identity and quantity of specified elements, nuclides, and their chemical species) and structural (in terms of location, dislocation, etc. of specified elements, nuclides, and their species) analyses, involving nuclear processes (nuclear reactions, nuclear radiations, etc.), nuclear techniques (reactors, accelerators, radiation detectors, etc.), and nuclear effects (hyperfine interactions, etc.). In the present compilation, basic radioanalytical terms are included which are relevant to radioanalysis, nuclear analysis and related techniques.","nuclear effects; nuclear processes; nuclides; radioanalytical chemistry; terminology","en","journal article","","","","","","Accepted Author Manuscript","","2021-11-16","","","RST/Applied Radiation & Isotopes","","",""
"uuid:73b44a4d-26f5-40ee-bf1a-8dcc07b27097","http://resolver.tudelft.nl/uuid:73b44a4d-26f5-40ee-bf1a-8dcc07b27097","Integration of operation and design of solar fuel plants: A carbon dioxide to methanol case study","Huesman, A.E.M. (TU Delft ChemE/Product and Process Engineering)","","2020","Operation and design of solar fuel plants involves a decision about the degree of coupling between the solar electricity profile and the plant. Full decoupling needs large scale battery storage to ensure power availability during the night while full coupling requires high conversion capacity during the day to realize the required average methanol production. An extended optimal control framework is presented that determines economic optimal operation. Extended indicates that operational and design degrees of freedom are considered simultaneously. Using a simplified dynamic model of the plant, the framework minimizes total fuel cost for an estimated cost structure by the year 2030. The results show that full coupling is economically preferred and that limited operational flexibility increases the manufacturing cost of methanol from approximately 1000 to 1200 USD/ton. Analysis of the results reveals the cost structure determines an Operational Tipping Point that marks a clear transition from coupled to decoupled operation.","Dynamic optimization; Process operation and design; Solar fuel plant","en","journal article","","","","","","Accepted Author Manuscript","","2022-06-10","","","ChemE/Product and Process Engineering","","",""
"uuid:8286488d-4bd6-4ccf-8829-ca9459765ecb","http://resolver.tudelft.nl/uuid:8286488d-4bd6-4ccf-8829-ca9459765ecb","Lattice Fracture Model for Concrete Fracture Revisited: Calibration and Validation","Chang, Z. (TU Delft Materials and Environment); Zhang, Hongzhi (Shandong University); Schlangen, E. (TU Delft Materials and Environment); Šavija, B. (TU Delft Materials and Environment)","","2020","The lattice fracture model is a discrete model that can simulate the fracture process of cementitious materials. In this work, the Delft lattice fracture model is reviewed and utilized for fracture analysis. First, a systematic calibration procedure that relies on the combination of two uniaxial tensile tests is proposed to determine the input parameters of lattice elements—tensile strength, compressive strength and elastic modulus. The procedure is then validated by simulating concrete fracture under complex loading and boundary conditions: Uniaxial compression, three-point bending, tensile splitting, and double-edge-notch beam shear. Simulation results are compared to experimental findings in all cases. The focus of this publication is therefore not only on summarizing existing knowledge and showing the capabilities of the lattice fracture model; but also to fill in an important gap in the field of lattice modeling of concrete fracture; namely, to provide a recommendation for a systematic model calibration using experimental data. Through this research, numerical analyses are performed to fully understand the failure mechanisms of cementitious materials under various loading and boundary conditions. 
While the model presented herein does not aim to completely reproduce the load-displacement curves, and due to its simplicity results in relatively brittle post-peak behavior, possible solutions for this issue are also discussed in this work.","Concrete; Fracture process; Lattice fracture model; Size effect; Slenderness","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:10126a3c-be32-4948-a44c-39cf81a84f46","http://resolver.tudelft.nl/uuid:10126a3c-be32-4948-a44c-39cf81a84f46","Structured catalysts and reactors – Perspectives for demanding applications","Kapteijn, F. (TU Delft ChemE/Catalysis Engineering); Moulijn, J.A. (TU Delft ChemE/Catalysis Engineering)","","2020","In this perspective paper a brief overview is given of the past developments in the field of structured catalysts and reactors, the potential for process intensification, energy and materials efficiency. Current exciting new developments for demanding processes are highlighted and directions indicated that contribute to a future sustainable chemical industry.","(Nano) structured; Energy management; Foams; Monoliths; Packed beds; Process intensification; Structured catalytic reactors","en","journal article","","","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:dd0930fe-59d8-47d3-9c18-8541d29b3f55","http://resolver.tudelft.nl/uuid:dd0930fe-59d8-47d3-9c18-8541d29b3f55","Comment on ‘Is ‘re-mobilisation’ nature restoration or nature destruction? A commentary’ by I. Delgado-Fernandez, R.G.D. Davidson-Arnott & P.A. Hesp","Arens, Sebastiaan M. (Arens Bureau for Beach and Dune Research); de Vries, S. (TU Delft Coastal Engineering); Geelen, Luc HWT (Waternet); Ruessink, Gerben (Universiteit Utrecht); van der Hagen, Harrie GJM (Dunea); Groenedijk, Dick (PWN Drinking Water Supply Company)","","2020","In their recently published paper, Delgado-Fernandez et al. (2019) critically review the limitations and dangers of the relatively recent shift towards dune rejuvenation management in North-western Europe. We would like to comment on the paper from the Dutch perspective.","Aeolian processes; Bio diversity; Coastal dunes; Management; Remobilisation","en","journal article","","","","","","","","","","","Coastal Engineering","","",""
"uuid:389e8179-5220-4987-8765-6a1fdd30a168","http://resolver.tudelft.nl/uuid:389e8179-5220-4987-8765-6a1fdd30a168","State-Space Network Topology Identification from Partial Observations","Coutino, Mario (TU Delft Signal Processing Systems); Isufi, E. (University of Pennsylvania); Maehara, Takanori (RIKEN Center for Emergent Matter Science (CEMS)); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2020","In this article, we explore the state-space formulation of a network process to recover from partial observations the network topology that drives its dynamics. To do so, we employ subspace techniques borrowed from system identification literature and extend them to the network topology identification problem. This approach provides a unified view of network control and signal processing on graphs. In addition, we provide theoretical guarantees for the recovery of the topological structure of a deterministic continuous-time linear dynamical system from input-output observations even when the input and state interaction networks are different. Our mathematical analysis is accompanied by an algorithm for identifying from data,a network topology consistent with the system dynamics and conforms to the prior information about the underlying structure. The proposed algorithm relies on alternating projections and is provably convergent. 
Numerical results corroborate the theoretical findings and the applicability of the proposed algorithm.","graph signal processing; inverse eigenvalue problems; network topology identification; signal processing over networks; state-space models; Inverse eigenvalue problems","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-08-20","","","Signal Processing Systems","","",""
"uuid:70ddc879-3333-4f06-a59c-ed25a5c68af3","http://resolver.tudelft.nl/uuid:70ddc879-3333-4f06-a59c-ed25a5c68af3","Predictive maintenance of systems subject to hard failure based on proportional hazards model","Hu, Jiawen (National University of Singapore); Chen, P. (TU Delft Statistics)","","2020","The remaining useful lifetime (RUL) estimated from the in-situ degradation data has shown to be useful for online predictive maintenance. In the literature, the RUL is often estimated by assuming a soft-failure threshold for the degradation data. In practice, however, systems may not be subject to the degradation-induced soft failures. Instead, the systems are deemed to be fail when they cannot perform the intended function, and such failures are known as hard failures. Because there are no fixed thresholds for hard failures, the corresponding RUL estimation is not an easy task, which causes difficulties in finding the optimal maintenance schedule. In this study, a Weibull proportional hazards model is proposed to jointly model the degradation data and the failure time data. The degradation data are treated as the time-varying covariates so that the degradation does not directly lead to system failures, but increases the hazard rate of hard failures. A random-effects Wiener process is proposed to model the degradation data by considering the system heterogeneities. Based on the developed proportional hazards model, closed-form distribution of the RUL is derived upon each inspection and the optimal maintenance schedule is then obtained by minimizing the system maintenance cost. The proposed maintenance strategy is successfully applied to predictive maintenance of lead-acid batteries.","Condition-based maintenance; Degradation data; Weibull distribution; Wiener process","en","journal article","","","","","","","","2021-07-28","","","Statistics","","",""
"uuid:8435ef15-23b0-4bcc-9181-a91f3cce7005","http://resolver.tudelft.nl/uuid:8435ef15-23b0-4bcc-9181-a91f3cce7005","Multi-Task Sensor Resource Balancing Using Lagrangian Relaxation and Policy Rollout","Schöpe, M.I. (TU Delft Microwave Sensing, Signals & Systems); Driessen, J.N. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2020","The sensor resource management problem in a multi-object tracking scenario is considered. In order to solve it, a dynamic budget balancing algorithm is proposed which models the different sensor tasks as partially observable Markov decision processes. Those are being solved by applying a combination of Lagrangian relaxation and policy rollout. The algorithm converges to a solution which is close to the optimal steady-state solution. This is shown through simulations of a two-dimensional tracking scenario. Moreover, it is demonstrated how the algorithm allocates the sensor time budgets dynamically to a changing environment and takes predictions of the future situation into account.","Lagrangian Relaxation; Partially Observable Markov Decision Process; Policy Rollout; Sensor Resource Management","en","conference paper","IEEE","","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:e365b36e-b5c0-4134-80c4-252839f66ded","http://resolver.tudelft.nl/uuid:e365b36e-b5c0-4134-80c4-252839f66ded","Making the design process in design education explicit: two exploratory case studies","van Dooren, E.J.G.C. (TU Delft Architectural Engineering); Els, Boshuizen (Open University of the Netherlands; University of Turku); van Merriënboer, J.J.G. (Universiteit Maastricht); Asselbergs, M.F. (TU Delft Architectural Engineering); van Dorst, M.J. (TU Delft Environmental Technology and Design)","","2020","The aim of design education is that students learn to think and act like designers. However, the focus in the design studio is mainly on the design product, whereas the ‘why and how’ of the design process are barely addressed. A risk of learning by performing real-life tasks without addressing the skills involved, that is, without receiving appropriate support and guidance, is that learners are overwhelmed by the complexity of the tasks.
To make the design process explicit, a conceptual framework was developed in earlier research. This paper reports a first evaluation of how the articulation of basic designerly skills with the help of a conceptual tool is perceived by students and teachers, and whether it changes students’ conceptions of the design process and their self-efficacy.
In two exploratory case studies, questionnaires provide insight. The first is a short intervention in which students’ perception is measured. In the second case study, the design process was addressed in the design studio, and changes in students’ conceptions and self-efficacy were measured. Insight is also provided into teachers’ perception of working with the framework.
The results of these exploratory studies indicate a positive effect. The teachers involved perceived the framework as a structuring factor during the tutoring sessions, for both teacher and students. Students perceived the explanation of the design process as helpful. A change in students’ design conceptions and an increase in self-efficacy are seen.
The accident processes are visualized as scenarios in bowties. This research focuses on the status of the preventive barriers on the left-hand side of the bowtie. Both the quality – expressed in reliability/availability and effectiveness – and the activation of the barrier system give an indication of the development of the accident scenarios and the likelihood of the central event. This likelihood is calculated as a loss of risk reduction compared to the original design. The calculations of this assessment result in indicators called ""preventive barrier indicators"". They provide an indication of the likelihood of the scenario. This likelihood is not an absolute value, but rather an indication of the change in the status quo which should initiate further action. This manuscript shows what this action must be and what the urgency of the action is.
In the presented concept, every technical change of the barrier system is used to determine the current development and likelihood of the scenario. If the quality parameters of the barriers are accommodated in an automated system, the preventive barrier indicator can be calculated and displayed in real time. This is different for non-technical changes: they will have to be entered and processed manually.
These two attitudes are inherent to landscape architecture, which traditionally prioritizes the site over the programme, and—because of the long term, time-based condition of the landscape—is forced to think in open-ended designs. In this paper we discuss a selection of graduation projects of the landscape architecture track at the TU Delft in order to illustrate how inclusivity is inherent to a complete understanding of landscape architecture. Four essential perspectives on analysis and design—perception, palimpsest, process and scale continuum—are discussed in order to reveal their capacity to serve as a basis for designing inclusive urban landscapes.","landscape architecture; education; perception; palimpsest; process; scale-continuum; inclusive urbanism; generous cities","en","journal article","","","","","","Vol. 6 (2020): Inclusive Urbanism: Advances in research, education and practice. ISBN 978-94-6366-317-5","","","","","Landscape Architecture","","",""
"uuid:d5b48841-d9cd-4420-bae9-2bb4676919a3","http://resolver.tudelft.nl/uuid:d5b48841-d9cd-4420-bae9-2bb4676919a3","Patterns of Circular Transition: What Is the Circular Economy Maturity of Belgian Ports?","Haezendonck, Elvira (Vrije Universiteit Brussel; Universiteit Antwerpen); Van den Berghe, K.B.J. (TU Delft Urban Development Management)","","2020","Large seaport hubs in Northwestern Europe are aiming to develop as circular hotspots and are striving to become first movers in the circular economy (CE) transition. In order to facilitate their transition, it is therefore relevant to unravel potential patterns of the circular transition that ports are currently undertaking. In this paper, we explore the CE patterns of five Belgian seaports. Based on recent (strategy) documents from port authorities and on in-depth interviews with local port executives, the circular initiatives of these ports are mapped, based on their spatial characteristics and transition focus. The set of initiatives per port indicates its maturity level in terms of transition towards a circular approach. For most studied seaports, an energy recovery focus based on industrial symbiosis initiatives seems to dominate the first stages in the transition process. Most initiatives are not (yet) financially sustainable, and there is a lack of information on potential new business models that ports can adopt in view of a sustainable transition. The analysis of CE patterns in this paper contributes to how ports lift themselves out of the linear lock-in, as it demonstrates that ports may walk a different path and at a diverging speed in their CE transition, but also that the Belgian ports so far have focused too little on their cargo orchestrating role in that change process. 
Moreover, it offers a first insight into how integrated and sustainable the ports’ CE initiatives currently are.","Belgium; Case studies; Circular economy; Circular initiative; Maturity; Patterns; Ports; Process; Strategy; Transition; Circular Built Environment","en","journal article","","","","","","","","","","","Urban Development Management","","",""
"uuid:7396df62-e119-4637-8812-efbbfb7e1ccd","http://resolver.tudelft.nl/uuid:7396df62-e119-4637-8812-efbbfb7e1ccd","Topology-Aware Joint Graph Filter and Edge Weight Identification for Network Processes","Natali, A. (TU Delft Signal Processing Systems); Coutino, Mario (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2020","Data defined over a network have been successfully modelled by means of graph filters. However, although in many scenarios the connectivity of the network is known, e.g., smart grids, social networks, etc., the lack of well-defined interaction weights hinders the ability to model the observed networked data using graph filters. Therefore, in this paper, we focus on the joint identification of coefficients and graph weights defining the graph filter that best models the observed input/output network data. While these two problems have been mostly addressed separately, we here propose an iterative method that exploits the knowledge of the support of the graph for the joint identification of graph filter coefficients and edge weights. We further show that our iterative scheme guarantees a non-increasing cost at every iteration, ensuring a globally-convergent behavior. Numerical experiments confirm the applicability of our proposed approach.","Filtering over graphs; Graph filter identification; Graph signal processing; Networked data modeling; Topology identification","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-04-20","","","Signal Processing Systems","","",""
"uuid:d4751daa-453a-4b4a-a4fd-7dd768fdb14e","http://resolver.tudelft.nl/uuid:d4751daa-453a-4b4a-a4fd-7dd768fdb14e","The exponential resolvent of a markov process and large deviations for markov processes via hamilton-jacobi equations","Kraaij, R.C. (TU Delft Applied Probability)","","2020","We study the Hamilton-Jacobi equation f − λHf = h, where Hf = e−f Aef and where A is an operator that corresponds to a well-posed martingale problem. We identify an operator that gives viscosity solutions to the Hamilton-Jacobi equa-tion, and which can therefore be interpreted as the resolvent of H. The operator is given in terms of an optimization problem where the running cost is a path-space relative entropy. Finally, we use the resolvents to give a new proof of the abstract large deviation result of Feng and Kurtz (2006).","Hamilton-Jacobi equations; Large deviations; Markov processes; Non-linear resolvent","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:cc9d0b5e-d6b5-42ae-9355-ffc81bd1267c","http://resolver.tudelft.nl/uuid:cc9d0b5e-d6b5-42ae-9355-ffc81bd1267c","Framing and tracing human-centered design teams' method selection: an examination of decision-making strategies","Rao, Vivek (University of California); Kim, E.Y. (TU Delft Marketing and Consumer Research); Kwon, Jieun (University of Minnesota Twin Cities); Agogino, Alice M. (University of California); Goucher-Lambert, Kosa (University of California)","","2020","Designers’ choices of methods are well known to shape project outcomes. However, questions remain about why design teams select particular methods and how teams’ decision-making strategies are influenced by project- and process-based factors. In this mixed-methods study, we analyze novice design teams’ decision-making strategies underlying 297 selections of human-centered design methods over the course of three semester-long project-based engineering design courses. We propose a framework grounded in 100+ factors sourced from new product development literature that classifies design teams’ method selection strategy as either Agent- (A), Outcome- (O), or Process- (P) driven, with eight further subclassifications. Coding method selections with this framework, we uncover three insights about design team method selection. First, we identify fewer outcomes-based selection strategies across all phases and innovation types. Second, we observe a shift in decision-making strategy from user-focused outcomes in earlier phases to product-based outcomes in later phases. Third, we observe that decision-making strategy produces a greater heterogeneity of method selections as compared to the class average as a whole, or project type alone. 
These findings provide a deeper understanding of designers’ method selection behavior and have implications for effective management of design teams, development of automated design support tools to aid design teams, and curation of design method repositories.","Decision theory; Design education; Design methodology; Design process; Design teams; Design theory; Design theory and methodology; User-centered design","en","journal article","","","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:81f88b60-fcb4-434f-8948-2082c2350454","http://resolver.tudelft.nl/uuid:81f88b60-fcb4-434f-8948-2082c2350454","A product development approach advisor for navigating common design methods, processes, and environments","Stewart, Shelby (Stevens Institute of Technology); Giambalvo, Jack (Stevens Institute of Technology); Vance, Julia (Stevens Institute of Technology); Faludi, Jeremy (TU Delft Circular Product Design); Hoffenson, Steven (Stevens Institute of Technology)","","2020","Many different product development approaches are taught and used in engineering and management disciplines. These formalized design methods, processes, and environments differ in the types of projects for which they are relevant, the project components they include, and the support they provide users. This paper details a review of sixteen well-established product development approaches, the development of a decision support system to help designers and managers navigate these approaches, and the administration of a survey to gather subjective assessments and feedback from design experts. The included approaches—design thinking, systems thinking, total quality management, agile development, waterfall process, engineering design, spiral model, vee model, axiomatic design, value-driven design, decision-based design, lean manufacturing, six sigma, theory of constraints, scrum, and extreme programming—are categorized based on six criteria: complexity, guidance, phase, hardware or software applicability, values, and users. A decision support system referred to as the Product Development Approach Advisor (PD Advisor) is developed to aid designers in navigating these approaches and selecting an appropriate approach based on specific project needs. Next, a survey is conducted with design experts to gather feedback on the support system and the categorization of approaches and criteria. 
The survey results are compared to the original classification of approaches by the authors to validate and provide feedback on the PD Advisor. The findings highlight the value and limitations of the PD Advisor for product development practice and education, as well as the opportunities for future work.","Decision support system; Design methods; Design processes; Engineering design; Product development","en","journal article","","","","","","","","","","","Circular Product Design","","",""
"uuid:ad4f5f97-c83f-4823-8565-acffe5accf46","http://resolver.tudelft.nl/uuid:ad4f5f97-c83f-4823-8565-acffe5accf46","Vector Doppler imaging of small vessels using directionally filtered Power Doppler images","Generowicz, B.S. (Erasmus MC); Verhoef, Luuk (Erasmus MC); Mastik, Frits (Erasmus MC); Dijkhuizen, Stefanie (Erasmus MC); van Dorp, Nikki (Erasmus MC); Voorneveld, Jason (Erasmus MC); Bosch, Johannes (Erasmus MC); Kumar, Karishma (Student TU Delft); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2020","Power Doppler (PD) imaging has become a staple in high frame rate ultrasound imaging due to its ability to image small vessels and slow-moving flows, such as in the case of imaging blood flow in the brain. Alternatively, color Doppler (CD) can be used to determine the one-dimensional directional information of the blood scatterers. This can help determine if the flow is arterial or venous, or distinguish between adjacent flows that have an opposite direction. Current methods for estimating 2D blood velocity vectors rely mostly on trigonometric solutions using synthetic apertures, or large plane-wave angles in transmission and sub-apertures in receive to be able to resolve the 2D vector. Relative to PD or CD techniques, these methods are more computationally expensive and have not been successfully used to image blood flow direction within micrometer-sized vasculature. In this paper, we propose to use the orientations of the vessels derived from a directional spatial filter in combination with the CD signal to enhance the PD images with directional information. 
This approach was tested on simulated data as well as on a 2D image containing brain vasculature of a mouse.","Color Doppler; Doppler; Gabor filter; Power Doppler; Signal Processing; Vector Doppler","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-05-17","","","Signal Processing Systems","","",""
"uuid:da333e94-0495-42eb-b3ee-3ea0ce8acbd8","http://resolver.tudelft.nl/uuid:da333e94-0495-42eb-b3ee-3ea0ce8acbd8","A Study on Reference Microphone Selection for Multi-Microphone Speech Enhancement","Zhang, Jie (University of Science and Technology of China (USTC), Hefei); Chen, Huawei (Nanjing University of Aeronautics and Astronautics); Hendriks, R.C. (TU Delft Signal Processing Systems)","","2020","Multi-microphone speech enhancement methods typically require a reference position with respect to which the target signal is estimated. Often, this reference position is arbitrarily chosen as one of the microphones. However, it has been shown that the choice of the reference microphone can have a significant impact on the final noise reduction performance. In this paper, we therefore theoretically analyze the impact of selecting a reference on the noise reduction performance with near-end noise being taken into account. Following the generalized eigenvalue decomposition (GEVD) based optimal variable span filtering framework, we find that for any linear beamformer, the output signal-to-noise ratio (SNR) taking both the near-end and far-end noise into account is reference dependent. Only when the near-end noise is neglected, the output SNR of rank-1 beamformers does not depend on the reference position. However, in general for rank-r beamformers with r>1 (e.g., the multichannel Wiener filter) the performance does depend on the reference position. Based on these findings, we propose an optimal algorithm for reference microphone selection that maximizes the output SNR. In addition, we propose a lower-complexity algorithm that is still optimal for rank-1 beamformers, but sub-optimal for the general rank-r beamformers. 
Experiments using a simulated microphone array validate the effectiveness of both proposed methods and show that in terms of quality, several dB can be gained by selecting the proper reference microphone.","Acoustic distortion; Array signal processing; low-rank approximation; Microphone arrays; Microphones; multi-channel beamforming; Noise reduction; reference microphone; relative acoustic transfer function; Signal to noise ratio; Speech enhancement; variable span linear filters","en","journal article","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:c9c93b87-b8cd-45f0-bdcc-af029500b729","http://resolver.tudelft.nl/uuid:c9c93b87-b8cd-45f0-bdcc-af029500b729","Lean Toolbox Approach for Effective Preparation of Housing Refurbishment Projects Using Critical Success Factors","Vrijhoef, R. (TU Delft Design & Construction Management); van Dijkhuizen, M.J. (TU Delft Integral Design & Management; Hogeschool Utrecht)","","2020","Refurbishment projects, notably for social housing, are special kinds of projects for many reasons, including inflexible existing stock, low available budgets, and the involvement of residents staying in their homes during construction. Lean tools could be helpful not only during construction, but also in the preparation of projects, including requirements definition, budgeting, design, engineering and planning. The preparation phase also has typical peculiarities, including political and social aspects, and is often time and cost consuming. Much lean research has focussed on improving the construction of new-build and private sector projects. In contrast, this paper aims to demonstrate the merits of lean tooling in the preparation phase of social housing refurbishments. The research reported examined lean tooling applications and their effects on project success in selected case studies of social housing refurbishments in the Netherlands. The research was a design-based action research shaping a preselected catalogue of tools, i.e. a lean toolbox. Next, tools were selected together with practitioners for application in the case projects. After those interventions, interviews were held to register effects on critical success factors in the projects. Most of the tool applications from the lean toolbox approach appeared to be effective in harnessing critical success factors in the projects.","Action research; Housing refurbishment; Lean construction process; Lean toolbox; Project preparation","en","conference paper","","","","","","","","","","","Design & Construction Management","","",""
"uuid:84ef11ba-1528-4401-a3bc-6abba7caca04","http://resolver.tudelft.nl/uuid:84ef11ba-1528-4401-a3bc-6abba7caca04","Workshop on Interdisciplinary Insights into Group and Team Dynamics","Hung, H.S. (TU Delft Pattern Recognition and Bioinformatics); Murray, Gabriel (University of the Fraser Valley); Varni, Giovanna (Telecom Paris Tech); Lehmann-Willenbrock, Nale (Universität Hamburg); Gerpott, Fabiola H. (WHU - Otto Beisheim School of Management, Vallendar); Oertel, Catharine (TU Delft Interactive Intelligence)","","2020","There has been gathering momentum over the last 10 years in the study of group behavior in multimodal multiparty interactions. While many works in the computer science community focus on the analysis of individual or dyadic interactions, we believe that the study of groups adds an additional layer of complexity with respect to how humans cooperate and what outcomes can be achieved in these settings. Moreover, the development of technologies that can help to interpret and enhance group behaviours dynamically is still an emerging field. Social theories that accompany the study of group dynamics are in their infancy and there is a need for more interdisciplinary dialogue between computer scientists and social scientists on this topic. 
This workshop has been organised to facilitate those discussions and strengthen the bonds between these overlapping research communities.","affective computing; group dynamics; multimodal interaction; multiparty interaction; social psychology; social signal processing","en","conference paper","Association for Computing Machinery (ACM)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-04-08","","","Pattern Recognition and Bioinformatics","","",""
"uuid:ae7619df-ef5a-4364-a0d6-f3103a006497","http://resolver.tudelft.nl/uuid:ae7619df-ef5a-4364-a0d6-f3103a006497","Joint Features Extraction for Multiple Moving Targets Using (Ultra-)Wideband FMCW Signals in the Presence of Doppler Ambiguity","Xu, S. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2020","This article addresses the joint estimation of range, velocity and azimuth for multiple fast-moving targets using (ultra-)wideband (UWB) frequency-modulated continuous-wave (FMCW) radar with a phased array in the presence of Doppler ambiguities. The range migration of moving targets is described with the coupling of the fast-time and slow-time (chirp index), leading to the smearing of the target Doppler spectrum. This phenomenon degrades the performance of conventional detection and estimation techniques. As with range-Doppler processing, the estimation accuracy for direction-of-arrival (DOA) with conventional narrowband-based algorithms significantly degrades if a UWB signal is deployed. For the FMCW waveform, the wideband DOA differs from the narrowband DOA due to an extra coupling term, similar to the range migration problem. A novel spectral norm-based algorithm for joint estimation of range, velocity and DOA of fast-moving targets is proposed, taking the appropriate wideband signal model with the coupling terms into account. The proposed spectral norm-based algorithm avoids off-grid peak search and can be easily accelerated with the power iteration algorithm; it outperforms conventional coherent integration methods in both accuracy and efficiency when using moderate data size. 
The advantages of the proposed algorithm and its super-resolution capability are validated with numerical simulations.","(ultra-)wideband; Antenna arrays; Couplings; Direction-of-arrival estimation; DOA; Doppler ambiguities; Doppler effect; Estimation; FMCW; power iteration; Signal processing algorithms; Wideband","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-05-23","","","Microwave Sensing, Signals & Systems","","",""
"uuid:b9bb8ef4-6fe0-42aa-94b6-e9d575558c6b","http://resolver.tudelft.nl/uuid:b9bb8ef4-6fe0-42aa-94b6-e9d575558c6b","Assessing bio-oil co-processing routes as CO2 mitigation strategies in oil refineries","Yáñez, Édgar (Rijksuniversiteit Groningen); Meerman, Hans (Rijksuniversiteit Groningen); Ramirez, Andrea (TU Delft Energie and Industrie); Castillo, Édgar (Colombian Petroleum Institute); Faaij, Andre (Rijksuniversiteit Groningen)","","2020","The oil industry needs to reduce CO2 emissions across the entire lifecycle of fossil fuels to meet environmental regulations and societal requirements and to sustain its business. With this goal in mind, this study aims to evaluate the CO2 mitigation potential of several bio-oil co-processing pathways in an oil refinery. Techno-economic analysis was conducted on different pathways and their greenhouse gas (GHG) mitigation potentials were compared. Thirteen pathways with different bio-oils, including vegetable oil (VO), fast pyrolysis oil (FPO), hydro-deoxygenated oil (HDO), catalytic pyrolysis oil (CPO), hydrothermal liquefaction oil (HTLO), and Fischer–Tropsch fuels, were analyzed. However, no single pathway could be presented as the best option. This would depend on the criteria used and the target of the co-processing route. The results obtained indicated that up to 15% of the fossil-fuel output in the refinery could be replaced by biofuel without major changes in the core activities of the refinery. The consequent reduction in CO2 emissions varied from 33% to 84% when compared with pure equivalent fossil fuels replaced (i.e., gasoline and diesel). Meanwhile, the production costs varied from 17 to 31€/GJ (i.e., 118–213$/bbleq). 
Co-processing with VO resulted in the lowest overall performance among the options that were evaluated, while co-processing HTLO in the hydrotreatment unit and FPO in the fluid catalytic cracking unit showed the highest potential for CO2 avoidance (69% of refinery CO2 emissions) and reduction in CO2 emissions (84% compared to fossil fuel), respectively. The cost of CO2 emissions avoided for all of the assessed routes was in the range of €99–651 per tCO2.","bio-oil; biomass; co-processing; CO2 mitigation; oil industry; pyrolysis oil; refinery","en","journal article","","","","","","","","","","","Energie and Industrie","","",""
"uuid:9d576cc2-9723-48a0-8bd5-c86ac4c7ca1b","http://resolver.tudelft.nl/uuid:9d576cc2-9723-48a0-8bd5-c86ac4c7ca1b","Visual Water Biography: Translating Stories in Space and Time","Bobbink, I. (TU Delft Landscape Architecture); Loen, S.S. (LILA Living Landscapes)","","2020","The supervision of water systems in many countries is centralised and taken over from local water management collectives of ‘water workers’ by governmental or other water management institutions. Communities are literally and figuratively cut off from ‘their’ water systems, due to the increase of urbanisation and industrialisation. On account of water management, humankind changed from communities of actively engaged water workers into passive users. In so doing, crucial knowledge about how communities created, maintained, and expanded ‘living water systems’, such as rice terraces, low-pasture systems, polders, floating-gardens, brooks-mill, and tidal systems, is rapidly diminishing. Revealing stories (oral accounts) of water workers generates insights and understanding of forgotten aspects of the landscape. They hold information on how to engage with water in a more holistic way, strategies that might help in facing today’s challenges. The world in general, but planners, spatial designers, and water managers working with water in particular, have so far taken little account of these stories. Without documenting stories that are about the dynamic interaction between people and landscape, valuable knowledge has disappeared and continues to do so. To help overcome this knowledge gap and to learn from the past, the Visual Water Biography (VWB) is developed. The novel method is based on the Delft layer approach, in which the spatial relationship of a design and its topography is studied, developed by many authors from the faculty of landscape architecture at TU Delft, in combination with the landscape biography approach. 
The Visual Water Biography visualises and maps: 1) knowledge and 2) engagement of water workers by focusing on 3) circular and 4) cyclical processes that are descended in the landscape. The method developed for spatial planners, researchers, and designers explicitly allows for multi-disciplinary engagement with water workers, water professionals, people from other disciplines such as historians and ecologists, and the general public. The added value of the VWB method is shown by the case of the Dutch Sprengen and Brooks system, a water system that is well documented in terms of landscape biography but less understood as a living water system.","Communities of water workers; Cyclical and circular processes; Delft layer approach; Landscape architecture; Landscape biography; Living water systems; Spatial analysis; Sprengen and Brooks system; Transformation; Visual Water Biography (VWB)","en","journal article","","","","","","","","","","","Landscape Architecture","","",""
"uuid:06dfaf6b-5a83-47d0-994e-277466127119","http://resolver.tudelft.nl/uuid:06dfaf6b-5a83-47d0-994e-277466127119","Een praktische, kwalitatieve aanpak voor het voorspellen van majeure ongevallen in de procesindustrie op basis van organisatorische factoren","Schmitz, P.J.H. (TU Delft Safety and Security Science); Swuste, P.H.J.J. (TU Delft Safety and Security Science); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); van Nunen, K.L.L. (TU Delft Safety and Security Science)","","2020","OCI Nitrogen aims to build up knowledge of (leading, proactive) indicators that provide insight into the process safety performance of the ammonia production process. Three sub-studies have already been published in TtA. The main question of this sub-study is whether major accidents in the ammonia production process can be predicted from organizational factors, also called management delivery systems. A detailed example in retrospect shows that this is possible. Qualitative information can be generated from audits or peer reviews conducted by internal and/or external experts once every three to four years. In case of no major shortcomings or findings, it makes sense to measure quantitatively. Based on established threshold values, (management) indicators can then be determined. However, determining threshold values is not easy because the influence of organizational factors on the accident processes is difficult to determine. Much (retrospective) investigation into incidents is still needed to be able to standardize this.","delivery systems; safety management system; process safety; bowtie; indicator; ammoniak; organisatorische factoren","nl","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:18866f75-916e-4ffe-b887-c0dcbbe71be6","http://resolver.tudelft.nl/uuid:18866f75-916e-4ffe-b887-c0dcbbe71be6","A 77-GHz FMCW MIMO Radar Employing a Non-Uniform 2D Antenna Array and Substrate Integrated Waveguides","Hehenberger, Simon P. (Johannes Kepler University Linz); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems); Stelzer, Andreas (Johannes Kepler University Linz)","","2020","In state-of-the-art frequency-modulated continuous-wave (FMCW) multiple-input multiple-output (MIMO) radar systems, antennas are usually designed based on microstrip technology. They are arranged in uniform arrays such that the synthesized virtual array maximizes the angular resolution. This paper presents the design of a 77-GHz FMCW MIMO radar frontend with antennas and feed structures based on substrate integrated waveguides (SIW) and non-uniform planar arrays optimized for sidelobe suppression. A design procedure for MIMO arrays with particular emphasis on sidelobe level suppression based on convex optimization is presented, and a novel transition from differential microstrip line to SIW is utilized to feed the transmit antennas. Measurements show the successful SIW and antenna design, as well as a sidelobe level of 40 dB within the field of view (FOV) of the radar system.","antenna design; Array processing; FMCW radar; MIMO radar; radar system; slot array antenna; substrate integrated waveguide (SIW); waveguide transitions","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-06-24","","","Microwave Sensing, Signals & Systems","","",""
"uuid:24d13456-20b7-41e5-9fe5-60b9d9fc091e","http://resolver.tudelft.nl/uuid:24d13456-20b7-41e5-9fe5-60b9d9fc091e","Process systems engineering developments in Europe from an industrial and academic perspective","Kiss, A.A. (The University of Manchester; University of Twente); Grievink, J. (TU Delft ChemE/Product and Process Engineering)","","2020","Process Systems Engineering (PSE) is a discipline that deals with decision-making, at all levels and scales, by understanding any complex process system using a holistic view and a systems thinking framework. A closely related discipline (usually considered a part of PSE) is Computer Aided Process Engineering (CAPE), which is a complementary field that focuses on developing methods and providing solutions through systematic computer aided techniques for problems related to the design, control and operation of chemical systems. Nowadays, the ‘PSE’ term suffers from a branding issue to the point that PSE no longer gets the recognition that it deserves. In chemical engineering education, the integrative systems frame for process design, control and operations is virtually absent. Its application potential in process industry lags relative to academic research progress and results. This work aims to provide an informative industrial and academic perspective on PSE (focused on the European region), arguing that ‘systems thinking’ and ‘systems problem solving’ have to be given priority over just applications of computational problem solving methods. A multi-level view of the PSE field is provided within the academic and industrial context, and enhancements for PSE are suggested at their industrial and academic interfaces to create win-win situations.","Education; Industry; Interface; Perspectives; Process systems engineering; Research","en","journal article","","","","","","","","2021-04-10","","","ChemE/Product and Process Engineering","","",""
"uuid:aa2e1449-2479-4486-b839-33123f4c4c3d","http://resolver.tudelft.nl/uuid:aa2e1449-2479-4486-b839-33123f4c4c3d","Impact of spacing and pruning on quantity, quality and economics of Douglas-fir sawn timber: scenario and sensitivity analysis","Rais, Andreas (Technische Universität München); Poschenrieder, Werner (Technische Universität München); van de Kuilen, J.W.G. (TU Delft Bio-based Structures & Materials; Holzforschung München); Pretzsch, Hans (Technische Universität München)","","2020","Controlling the long-term effect of management on the quantity and properties of individual boards is a fundamental challenge for silviculture. Within this basic study on Douglas-fir, we have investigated the sensitivity of the net present value (NPV) to the three most common planting densities and a prominent pruning strategy. We therefore have applied an individual tree growth model, which represents intrinsic stem structure as a result of crown competition. The model extrapolated board strength development to the rotational age of 70 years, starting from real and comprehensive data recorded from experimental Douglas-fir plots at the age of 20 years. Total volume production increased from about 1600 m3 ha−1 for 1000 and 2000 trees ha−1 to 1800 m3 ha−1 for 4000 trees ha−1. The economic superiority of the lowest density stands increased considering the NPV at inflation-adjusted interest rates of 0%, 2% and 4%: Given an interest rate of 2% and no pruning, the NPV at 2000 trees ha−1 was about 50% of the one at 1000 trees ha−1. The NPV at 4000 trees ha−1 was even negative. Generally, artificial pruning was not effective. 
The revealed financial trade-off between growth and timber quality in young stands underlines the importance of silvicultural guidelines, which quantify the effect of management on yield per strength class and financial outcome.","Management tool; Processing chain; Strength grading; Timber quality","en","journal article","","","","","","","","","","","Bio-based Structures & Materials","","",""
"uuid:b8af1a43-1bb2-44e3-8e8d-0c011b1a0675","http://resolver.tudelft.nl/uuid:b8af1a43-1bb2-44e3-8e8d-0c011b1a0675","Path-space moderate deviations for a class of Curie–Weiss models with dissipation","Collet, F. (TU Delft Applied Probability; Università degli Studi di Padova); Kraaij, R.C. (TU Delft Applied Probability)","","2020","We modify the spin-flip dynamics of the Curie–Weiss model with dissipation in Dai Pra, Fischer and Regoli (2013) by considering arbitrary transition rates and we analyze the phase-portrait as well as the dynamics of moderate fluctuations for macroscopic observables. We obtain path-space moderate deviation principles via a general analytic approach based on the convergence of non-linear generators and uniqueness of viscosity solutions for associated Hamilton–Jacobi equations. The moderate asymptotics depend crucially on the phase we are considering and, moreover, their behavior may be influenced by the choice of the rates.","Bifurcation of periodic orbits; Hamilton–Jacobi equation; Interacting particle systems; Mean-field interaction; Moderate deviations; Perturbation theory for Markov processes","en","journal article","","","","","","Accepted author manuscript","","2022-05-25","","","Applied Probability","","",""
"uuid:c230fa2a-307c-4667-bd10-43b9708075fd","http://resolver.tudelft.nl/uuid:c230fa2a-307c-4667-bd10-43b9708075fd","Path-space moderate deviations for a Curie-Weiss model of self-organized criticality","Collet, F. (TU Delft Applied Probability; Università degli Studi di Padova); Gorny, Matthias (Laboratoire de Mathématiques d'Orsay); Kraaij, R.C. (TU Delft Applied Probability)","","2020","The dynamical Curie-Weiss model of self-organized criticality (SOC) was introduced in (Ann. Inst. Henri Poincaré Probab. Stat. 53 (2017) 658-678) and it is derived from the classical generalized Curie-Weiss by imposing a microscopic Markovian evolution having the distribution of the Curie-Weiss model of SOC (Ann. Probab. 44 (2016) 444-478) as unique invariant measure. In the case of Gaussian single-spin distribution, we analyze the dynamics of moderate fluctuations for the magnetization. We obtain a path-space moderate deviation principle via a general analytic approach based on convergence of non-linear generators and uniqueness of viscosity solutions for associated Hamilton-Jacobi equations. Our result shows that, under a peculiar moderate space-time scaling and without tuning external parameters, the typical behavior of the magnetization is critical.","Hamilton-Jacobi equation; Interacting particle systems; Mean-field interaction; Moderate deviations; Perturbation theory for Markov processes; Self-organized criticality","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:1ba53c72-a349-42b5-bba3-5018ce60203d","http://resolver.tudelft.nl/uuid:1ba53c72-a349-42b5-bba3-5018ce60203d","Wafer-scale transfer-free process of multi-layered graphene grown by chemical vapor deposition","Ricciardella, F. (TU Delft Electronic Components, Technology and Materials; Universität der Bundeswehr München); Vollebregt, S. (TU Delft Electronic Components, Technology and Materials); Boshuizen, B. (TU Delft ChemE/O&O groep); Danzl, F.J.K. (ECN part of TNO); Cesar, Ilkay (ECN part of TNO); Spinelli, Pierpaolo (ECN part of TNO); Sarro, Pasqualina M (TU Delft Electronic Components, Technology and Materials)","","2020","Chemical vapour deposition (CVD) has emerged as the dominant technique to combine high quality with large scale production of graphene. The key challenge for CVD graphene remains the transfer of the film from the growth substrate to the target substrate while preserving the quality of the material. Avoiding the transfer process of single or multi-layered graphene (SLG-MLG) has recently garnered much more interest. Here we report an original method to obtain a 4-inch wafer fully covered by MLG without any transfer step from the growth substrate. We prove that the MLG is completely released on the oxidized silicon wafer. A hydrogen peroxide solution is used to etch the molybdenum layer, used as a catalyst for the MLG growth via CVD. X-ray photoelectron spectroscopy proves that the layer of Mo is etched away and no residues of Mo are trapped beneath MLG. Terahertz transmission near-field imaging as well as Raman spectroscopy and atomic force microscopy show the homogeneity of the MLG film on the entire wafer after the Mo layer etch. 
These results mark a significant step forward for numerous applications of SLG-MLG on wafer scale, ranging from micro/nano-fabrication to solar cell technology.","chemical vapor deposition; large-area synthesis; multi-layered graphene; polymer-free transfer medium; transfer-free process","en","journal article","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:c08f4a79-c5fa-4b9f-9ef5-65b212fdf8f4","http://resolver.tudelft.nl/uuid:c08f4a79-c5fa-4b9f-9ef5-65b212fdf8f4","Co-deposition of silica and proteins at the interface between two immiscible electrolyte solutions","Poltorak, L. (TU Delft OLD ChemE/Organic Materials and Interfaces; Uniwersytet Lodzki); van der Meijden, Nienke (Student TU Delft); Skrzypek, Sławomira (Uniwersytet Lodzki); Sudhölter, Ernst J. R. (TU Delft ChemE/Advanced Soft Matter); de Puit, M. (TU Delft ChemE/Advanced Soft Matter; Netherlands Forensic Institute - NFI)","","2020","In this work, we have simultaneously examined the electrochemically driven deposition of three proteins (haemoglobin, acid phosphatase, and α-amylase) and silica films at a polarized liquid–liquid interface. The interfacial adsorption of the proteins occurs efficiently within the acidic pH range (pH = 2–4). The interfacial charge transfer reactions recorded in the presence of fully positively charged macromolecules were followed with cyclic voltammetry on the positive side of the potential window. Faradaic currents attributed to the presence of proteins in the aqueous phase appeared for concentrations equal to ca. 0.1 µM for haemoglobin and acid phosphatase and ca. 1 µM for the α-amylase. Concomitant deposition of silica films was achieved via the addition of tetraethoxysilane molecules to the organic phase (1,2-dichloroethane). The hydrolysis and condensation reactions of tetraethoxysilane were controlled via the interfacial transfer of H+ coinciding with the potential for protein adsorption. The effect of tetraethoxysilane concentration – up to 50% by volume – revealed significant shrinkage of the potential window (the region where capacitive currents are recorded). The optimized platform was then used to prepare silica-proteins co-deposits. 
These could be easily collected from the interface and further analyzed with infrared spectroscopy and transmission electron microscopy.","Acid phosphatase; Electrified liquid-liquid interface; Haemoglobin; Interfacial deposition; The sol-gel process of silica; α-amylase","en","journal article","","","","","","","","","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:c3ca5357-47f9-4d67-951e-dd3c201472ef","http://resolver.tudelft.nl/uuid:c3ca5357-47f9-4d67-951e-dd3c201472ef","Trapping and Detrapping in Colloidal Perovskite Nanoplatelets: Elucidation and Prevention of Nonradiative Processes through Chemical Treatment","Vonk, Sander J.W. (Universiteit Utrecht); Fridriksson, M.B. (Universiteit Utrecht); Hinterding, Stijn O.M. (Universiteit Utrecht); Mangnus, Mark J.J. (Universiteit Utrecht); Van Swieten, Thomas P. (Universiteit Utrecht); Grozema, F.C. (TU Delft ChemE/Opto-electronic Materials); Rabouw, Freddy T. (Debye Institute); van der Stam, W. (TU Delft ChemE/Opto-electronic Materials)","","2020","Metal-halide perovskite nanocrystals show promise as the future active material in photovoltaics, lighting, and other optoelectronic applications. The appeal of these materials is largely due to the robustness of the optoelectronic properties to structural defects. The photoluminescence quantum yield (PLQY) of most types of perovskite nanocrystals is nevertheless below unity, evidencing the existence of nonradiative charge-carrier decay channels. In this work, we experimentally elucidate the nonradiative pathways in CsPbBr3 nanoplatelets, before and after chemical treatment with PbBr2 that improves the PLQY. A combination of picosecond streak camera and nanosecond time-correlated single-photon counting measurements is used to probe the excited-state dynamics over 6 orders of magnitude in time. We find that up to 40% of the nanoplatelets from a synthesis batch are entirely nonfluorescent and cannot be turned fluorescent through chemical treatment. The other nanoplatelets show fluorescence, but charge-carrier trapping leads to losses that are prevented by chemical treatment. Interestingly, even without chemical treatment, some losses due to trapping are mitigated because trapped carriers spontaneously detrap on nanosecond-to-microsecond timescales. 
Our analysis shows that multiple nonradiative pathways are active in perovskite nanoplatelets, which are affected differently by chemical treatment with PbBr2. More generally, our work highlights that in-depth studies using a combination of techniques are necessary to understand nonradiative pathways in fluorescent nanocrystals. Such understanding is essential to optimize synthesis and treatment procedures.","Quantum yield; Excitons; Physical chemical processes; Electrical conductivity; Recombination","en","journal article","","","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:e5b017f6-46e2-447a-bb32-09b249963ab0","http://resolver.tudelft.nl/uuid:e5b017f6-46e2-447a-bb32-09b249963ab0","The SPPD-WRF framework: A novel and holistic methodology for strategical planning and process design of water resource factories","Kehrein, P.A. (TU Delft BT/Biotechnology and Society); van Loosdrecht, Mark C.M. (TU Delft BT/Environmental Biotechnology); Osseweijer, P. (TU Delft BT/Biotechnology and Society); Posada Duque, J.A. (TU Delft BT/Biotechnology and Society); Dewulf, Jo (Universiteit Gent)","","2020","This paper guides decision making in more sustainable urban water management practices that feed into a circular economy by presenting a novel framework for conceptually designing and strategically planning wastewater treatment processes from a resource recovery perspective. Municipal wastewater cannot any longer be perceived as waste stream because a great variety of technologies are available to recover water, energy, fertilizer, and other valuable products from it. Despite the vast technological recovery possibilities, only a few processes have yet been implemented that deserve the name water resource factory instead of wastewater treatment plant. This transition relies on process designs that are not only technically feasible but also overcome various non-technical bottlenecks. A multidimensional and multidisciplinary approach is needed to design water resource factories (WRFs) in the future that are technically feasible, cost effective, show low environmental impacts, and successfully market recovered resources. To achieve that, the wastewater treatment plant (WWTP) design space needs to be opened up for a variety of expertise that complements the traditional wastewater engineering domain. 
Implementable WRF processes can only be designed if the current design perspective, which is dominated by the fulfilment of legal effluent qualities and process costs, is extended to include resource recovery as an assessable design objective from an early stage on. Therefore, the framework combines insights and methodologies from different fields and disciplines beyond WWTP design like, e.g., circular economy, industrial process engineering, project management, value chain development, and environmental impact assessment. It supports the transfer of the end-of-waste concept into the wastewater sector as it structures possible resource recovery activities according to clear criteria. This makes recovered resources more likely to fulfil the conditions of the end-of-waste concept and allows the change in their definition from wastes to full-fledged products.","Circular economy; Conceptual process design; Cost-benefit analysis; Multiple-criteria decision making; Resource recovery; Sustainability assessment; Sustainable urban development; Urban water management; Wastewater treatment; Water resource factories","en","journal article","","","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:c0e8b361-7bb2-4da5-946c-3f2f946d19f3","http://resolver.tudelft.nl/uuid:c0e8b361-7bb2-4da5-946c-3f2f946d19f3","Change in low flows due to catchment management dynamics—Application of a comparative modelling approach","Gebremicael, T.G. (TU Delft Water Resources; IHE Delft Institute for Water Education; Tigray Agricultural Research Institute); van der Zaag, P. (TU Delft Water Resources; IHE Delft Institute for Water Education); Abbas Mohamedali, Y. (TU Delft Water Resources; IHE Delft Institute for Water Education; Hydraulic Research Station); Hagos, Eyasu Y. (Mekelle University)","","2020","Understanding the natural low flow of a catchment is critical for effective water management policy in semi-arid and arid lands. The Geba catchment in Ethiopia, forming the headwaters of Tekeze-Atbara basin was known for its severe land degradation before the recent large scale Soil and Water conservation (SWC) programs. Such interventions can modify the hydrological processes by changing the partitioning of the incoming rainfall on the land surface. However, the literature lacks studies to quantify the hydrological impacts of these interventions in the semi-arid catchments of the Nile basin. Statistical test and Indicators of Hydrological Alteration (IHA) were used to identify the trends of streamflow in two comparatives adjacent (one treated with intensive SWC intervention and control with fewer interventions) catchments. A distributed hydrological model was developed to understand the differences in hydrological processes of the two catchments. The statistical and IHA tools showed that the low flow in the treated catchment has significantly increased while considerably decreased in the control catchment. Comparative analysis confirmed that the low flow in the catchment with intensive SWC works was greater than that of the control by >30% while the direct runoff was lower by >120%. 
This implies that a large proportion of the rainfall in the treated catchment infiltrates and recharges aquifers, which subsequently contribute to streamflow during the dry season. The proportion of soil storage was more than double that of the control catchment. Moreover, a comparison of the hydrological responses in the pre- and post-intervention periods showed that a drastic reduction in direct runoff (>84%) has improved the low flow by >55%. This strongly suggests that the ongoing intensive SWC works have significantly improved the low flows while contributing to the reduction of total streamflow in the catchment.","catchment management; Geba catchment; hydrological processes; low flow; soil and water conservation; Tekeze-Atbara River basin","en","journal article","","","","","","","","","","","Water Resources","","",""
"uuid:b42b9e88-d083-4135-bf0e-e4142845a309","http://resolver.tudelft.nl/uuid:b42b9e88-d083-4135-bf0e-e4142845a309","Scalable distributed sensor fault diagnosis for smart buildings","Papadopoulos, Panayiotis M. (University of Cyprus); Reppa, V. (TU Delft Transport Engineering and Logistics); Polycarpou, Marios M. (University of Cyprus); Panayiotou, Christos G. (University of Cyprus)","","2020","The enormous energy use of the building sector and the requirements for indoor living quality that aim to improve occupants'productivity and health, prioritize Smart Buildings as an emerging technology. The Heating, Ventilation and Air-Conditioning ( HVAC ) system is considered one of the most critical and essential parts in buildings since it consumes the largest amount of energy and is responsible for humans comfort. Due to the intermittent operation of HVAC systems, faults are more likely to occur, possibly increasing eventually building's energy consumption and - or downgrading indoor living quality. The complexity and large scale nature of HVAC systems complicate the diagnosis of faults in a centralized framework. This paper presents a distributed intelligent fault diagnosis algorithm for detecting and isolating multiple sensor faults in large-scale HVAC systems. Modeling the HVAC system as a network of interconnected subsystems allows the design of a set of distributed sensor fault diagnosis agents capable of isolating multiple sensor faults by applying a combinatorial decision logic and diagnostic reasoning. The performance of the proposed method is investigated with respect to robustness, fault detectability and scalability. 
Simulations are used to illustrate the effectiveness of the proposed method in the presence of multiple sensor faults applied to an 83-zone HVAC system and to evaluate the sensitivity of the method with respect to sensor noise variance.","Fault diagnosis; HVAC; Buildings; Autoregressive processes; Analytical models; Heat pumps; Water heating","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-10-28","","","Transport Engineering and Logistics","","",""
"uuid:018fa4b8-9f9f-4c82-ba05-8d2d9059723c","http://resolver.tudelft.nl/uuid:018fa4b8-9f9f-4c82-ba05-8d2d9059723c","Improved target illumination at Ludvika mines of Sweden through seismic-interferometric surface-wave suppression","Balestrini, F.I. (TU Delft Applied Geophysics and Petrophysics); Draganov, D.S. (TU Delft Applied Geophysics and Petrophysics); Malehmir, Alireza (Uppsala University); Marsden, Paul (NIO (Nordic Iron Ore AB)); Ghose, R. (TU Delft Applied Geophysics and Petrophysics)","","2020","In mineral exploration, new methods to improve the delineation of ore deposits at depth are in demand. For this purpose, increasing the signal-to-noise ratio through suitable data processing is an important requirement. Seismic reflection methods have proven to be useful to image mineral deposits. However, in most hard rock environments, surface waves constitute the most undesirable source-generated or ambient noise in the data that, especially given their typical broadband nature, often mask the events of interest like body-wave reflections and diffractions. In this study, we show the efficacy of a two-step procedure to suppress surface waves in an active-source reflection seismic dataset acquired in the Ludvika mining area of Sweden. First, we use seismic interferometry to estimate the surface-wave energy between receivers, given that they are the most energetic arrivals in the dataset. Second, we adaptively subtract the retrieved surface waves from the original shot gathers, checking the quality of the unveiled reflections. We see that several reflections, judged to be from the mineralization zone, are enhanced and better visualized after this two-step procedure. Our comparison with results from frequency-wavenumber filtering verifies the effectiveness of our scheme, since the presence of linear artefacts is reduced. 
The results are encouraging, as they open up new possibilities for denoising hard rock seismic data and, in particular, for imaging of deep mineral deposits using seismic reflections. This approach is purely data driven and does not require significant judgment on the dip and frequency content of present surface waves, which often vary from place to place.","Data processing; Ludvika mines; Seismic Interferometry; Seismics; Surface waves","en","journal article","","","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:7d162d6b-2b0b-438d-8ba4-94d26bff0a44","http://resolver.tudelft.nl/uuid:7d162d6b-2b0b-438d-8ba4-94d26bff0a44","Disposal and recycle economic assessment for aircraft and engine end of life solution evaluation","Zhao, X. (TU Delft Air Transport & Operations; Northwestern Polytechnical University); Verhagen, W.J.C. (TU Delft Air Transport & Operations; Royal Melbourne Institute of Technology University); Curran, R. (TU Delft Air Transport & Operations)","","2020","The present study proposes an economic indicator to support the evaluation of aircraft End of Life (EoL) strategies in view of the increasing demand with regards to aircraft decommissioning. This indicator can be used to evaluate the economic performance and to facilitate the trade-off studies among different strategies. First, Disposal and Recycle (D&R) scenarios related to stakeholders are investigated to identify the core concepts for the economic evaluation. Next, we extracted the aircraft D&R process from various real-life practices. In order to obtain the economic measure for the engineering process, a method of estimating the D&R cost and values are developed by integrating product, process and cost properties. This analysis is demonstrated on an averaged data set and two EoL aircraft cases. In addition, sensitivity analysis is performed to evaluate the impact of the D&R cost, residual value, and salvage value. Results show that the disassembly and dismantling of an aircraft engine possesses relatively more economic gains than that for the aircraft. The main factors influencing the proposed D&R economic indicator are the salvage value and D&R cost for economically efficient D&R cases. In addition, delaying the disposal and recycle process for EoL aircraft can lead to economically unfavorable solutions. 
The economic indicator, combined with the evaluation methods, is widely applicable for evaluating the EoL solutions of engineering products, which implies a significant contribution of this research to decision making for such complex systems in terms of sustainable policy.","Aircraft and aircraft engine Life Cycle Analysis (LCA); Aircraft disposal and recycle process; Disposal and recycle economic indicator; End of Life (EoL); Engineering cost and value analysis","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:1d7eae95-3bad-48ae-8464-ff9174dd1470","http://resolver.tudelft.nl/uuid:1d7eae95-3bad-48ae-8464-ff9174dd1470","Transparent silicon carbide/tunnel SiO2 passivation for c-Si solar cell front side: Enabling Jsc > 42 mA/cm2 and iVoc of 742 mV","Pomaska, Manuel (Forschungszentrum Jülich GmbH); Köhler, Malte (Forschungszentrum Jülich GmbH); Procel Moya, P.A. (TU Delft Photovoltaic Materials and Devices); Zamchiy, Alexandr (Russian Academy of Sciences); Singh, Aryak (Forschungszentrum Jülich GmbH); Kim, Do Yun (Forschungszentrum Jülich GmbH); Isabella, O. (TU Delft Photovoltaic Materials and Devices); Zeman, M. (TU Delft Electrical Sustainable Energy); Li, Shenghao (Forschungszentrum Jülich GmbH; Sun Yat-sen University)","","2020","N-type microcrystalline silicon carbide (μc-SiC:H(n)) is a wide bandgap material that is very promising for the use on the front side of crystalline silicon (c-Si) solar cells. It offers a high optical transparency and a suitable refractive index that reduces parasitic absorption and reflection losses, respectively. In this work, we investigate the potential of hot wire chemical vapor deposition (HWCVD)–grown μc-SiC:H(n) for c-Si solar cells with interdigitated back contacts (IBC). We demonstrate outstanding passivation quality of μc-SiC:H(n) on tunnel oxide (SiO2)–passivated c-Si with an implied open-circuit voltage of 742 mV and a saturation current density of 3.6 fA/cm2. This excellent passivation quality is achieved directly after the HWCVD deposition of μc-SiC:H(n) at 250°C heater temperature without any further treatments like recrystallization or hydrogenation. Additionally, we developed magnesium fluoride (MgF2)/silicon nitride (SiNx:H)/silicon carbide antireflection coatings that reduce optical losses on the front side to only 0.47 mA/cm2 with MgF2/SiNx:H/μc-SiC:H(n) and 0.62 mA/cm2 with MgF2/μc-SiC:H(n). 
Finally, calculations with Sentaurus TCAD simulation using MgF2/μc-SiC:H(n)/SiO2/c-Si as front side layer stack in an IBC solar cell reveal a short-circuit current density of 42.2 mA/cm2, an open-circuit voltage of 738 mV, a fill factor of 85.2% and a maximum power conversion efficiency of 26.6%.","antireflecting coating; excellent passivation; heterojunction; hot wire CVD; lean process; refractive index; silicon carbide; tunnel oxide","en","journal article","","","","","","","","","","Electrical Sustainable Energy","Photovoltaic Materials and Devices","","",""
"uuid:fdb305c4-a122-4bc8-b9ce-ae6de177db99","http://resolver.tudelft.nl/uuid:fdb305c4-a122-4bc8-b9ce-ae6de177db99","Biotechnology for gas-to-liquid (GTL)wastewater treatment: A review","Surkatti, Riham (Qatar University); El-Naas, Muftah H. (Qatar University); van Loosdrecht, Mark C.M. (TU Delft BT/Environmental Biotechnology); Benamor, Abdelbaki (Qatar University); Al-Naemi, Fatima (Qatar University); Onwusogh, Udeogu (Qatar Shell RTC)","","2020","Gas-to-liquid (GTL) technology involves the conversion of natural gas into several liquid hydrocarbon products. The Fischer-Tropsch (F-T) process is the most widely applied approach for GTL, and it is the main source of wastewater in the GTL process. The wastewater is generally characterized by high chemical oxygen demand (COD) and total organic carbon (TOC) content due to the presence of alcohol, ketones and organic acids. The discharge of this highly contaminated wastewater without prior treatment can cause adverse effects on human life and aquatic systems. This review examines aerobic and anaerobic biological treatment methods that have been shown to reduce the concentration of COD and organic compounds in wastewater. Advanced biological treatment methods, such as cell immobilization and application of nanotechnology are also evaluated. The removal of alcohol and volatile fatty acids (VFA) from GTL wastewater can be achieved successfully under anaerobic conditions. However, the combination of anaerobic systems with aerobic biodegradation processes or chemical treatment processes can be a viable technology for the treatment of highly contaminated GTL wastewater with high COD concentration. 
The ultimate goal is to have treated wastewater that has good enough quality to be reused in the GTL process, which could lead to cost reduction and environmental benefits.","Biological treatment; Biomass immobilization; Fischer-tropsch (F-T) process; Nanoparticles","en","review","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:53a3460e-d4b0-4e30-9240-3f899d9c0bc0","http://resolver.tudelft.nl/uuid:53a3460e-d4b0-4e30-9240-3f899d9c0bc0","Unsupervised Feature Transfer for Batch Process Based on Geodesic Flow Kernel","Zhang, Zheming (Taiyuan University of Technology); Wang, Fang (Taiyuan University of Technology); Pang, Y. (TU Delft Transport Engineering and Logistics); Yan, Gaowei (Taiyuan University of Technology)","","2020","The problem of misalignment of the original measurement model is caused by nonlinear, time-varying characteristic of the batch process. In this paper, a method based on geodesic flow kernel (GFK) for feature transfer is proposed. By mapping data into the manifold space, the feature transfer from source domain to target domain is implemented. Distribution adaptation of real-time data and modeling data is performed to reduce the distribution difference between them. The historical data through distribution adaptation is used to establish a regression model to predict the real-time data, by which the unsupervised batch process soft sensor modeling is realized. The application of predicting the concentration of penicillin between different batches during the fermentation of penicillin demonstrated that the prediction accuracy of the model can be improved more effectively than the traditional soft sensor method.","Batch process; feature transfer; geodesic flow kernel; penicillin; unsupervised","en","conference paper","IEEE","","","","","Accepted Author Manuscript","","","","","Transport Engineering and Logistics","","",""
"uuid:0538d300-b3f7-447a-8bbd-421a959a67bf","http://resolver.tudelft.nl/uuid:0538d300-b3f7-447a-8bbd-421a959a67bf","Modeling Static Noise Margin for FinFET based SRAM PUFs","Masoumian, S. (TU Delft Computer Engineering; Intrinsic ID B.V.); Selimis, Georgios (Intrinsic ID B.V.); Maes, Roel (Intrinsic ID B.V.); Schrijen, Geert-Jan (Intrinsic ID B.V.); Hamdioui, S. (TU Delft Quantum & Computer Engineering); Taouil, M. (TU Delft Computer Engineering)","","2020","In this paper, we develop an analytical PUF model based on a compact FinFET transistor model that calculates the PUF stability (i.e. PUF static noise margin (PSNM)) for FinFET based SRAMs. The model enables a quick design space exploration and may be used to identify critical parameters that affect the PSNM. The analytical model is validated with SPICE simulations. In our experiments, we analyze the impact of process variation, technology, and temperature on the PSNM. The results show that the analytical model matches very well with the simulation model. From the experiments we conclude the following: (1) nFET variations have a larger impact on the PSNM than pFET (1.5% higher PSNM in nFET variations than pFET variations at 25°C), (2) high performance SRAM cells are more skewed (1.3% higher PSNM) (3) the reproducibility increases with smaller technology nodes (0.8% PSNM increase from 20 to 14 nm) (4) increasing the temperature from-10°C to 120°C leads to a PSNM change of approximately 1.0% for an extreme nFET channel length.","FinFET; process variation; SRAM PUF; Static noise margin; temperature","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:5d966e85-70eb-4971-b8b4-047debe127aa","http://resolver.tudelft.nl/uuid:5d966e85-70eb-4971-b8b4-047debe127aa","Observing and tracking bandlimited graph processes from sampled measurements","Isufi, E. (TU Delft Multimedia Computing); Banelli, Paolo (University of Perugia); Di Lorenzo, Paolo (Sapienza University of Rome); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2020","A critical challenge in graph signal processing is the sampling of bandlimited graph signals; signals that are sparse in a well-defined graph Fourier domain. Current works focused on sampling time-invariant graph signals and ignored their temporal evolution. However, time can bring new insights on sampling since sensor, biological, and financial network signals are correlated in both domains. Hence, in this work, we develop a sampling theory for time varying graph signals, named graph processes, to observe and track a process described by a linear state-space model. We provide a mathematical analysis to highlight the role of the graph, process bandwidth, and sample locations. We also propose sampling strategies that exploit the coupling between the topology and the corresponding process. Numerical experiments corroborate our theory and show the proposed methods trade well the number of samples with accuracy.","Graph processes; Graph signal processing; Kalman filtering; Observability; Sampling on graphs; Time varying graph signals","en","journal article","","","","","","","","","","","Multimedia Computing","","",""
"uuid:a2d0a20e-ddd2-4f6f-95eb-780f7ff4d005","http://resolver.tudelft.nl/uuid:a2d0a20e-ddd2-4f6f-95eb-780f7ff4d005","Fast spectral approximation of structured graphs with applications to graph filtering","Coutino, Mario (TU Delft Signal Processing Systems); Chepuri, Sundeep Prabhakar (Indian Institute of Science); Maehara, Takanori (RIKEN Center for Advanced Intelligence Project); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2020","To analyze and synthesize signals on networks or graphs, Fourier theory has been extended to irregular domains, leading to a so-called graph Fourier transform. Unfortunately, different from the traditional Fourier transform, each graph exhibits a different graph Fourier transform. Therefore to analyze the graph-frequency domain properties of a graph signal, the graph Fourier modes and graph frequencies must be computed for the graph under study. Although to find these graph frequencies and modes, a computationally expensive, or even prohibitive, eigendecomposition of the graph is required, there exist families of graphs that have properties that could be exploited for an approximate fast graph spectrum computation. In this work, we aim to identify these families and to provide a divide-and-conquer approach for computing an approximate spectral decomposition of the graph. Using the same decomposition, results on reducing the complexity of graph filtering are derived. These results provide an attempt to leverage the underlying topological properties of graphs in order to devise general computational models for graph signal processing.","Approximate graph Fourier transform; Divide-and-conquer; Graph Fourier transform; Graph signal processing","en","journal article","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:588a333a-1b65-442a-bf3a-e7dfe7a1ceec","http://resolver.tudelft.nl/uuid:588a333a-1b65-442a-bf3a-e7dfe7a1ceec","Enzymatic Hydrolysis of Sugarcane Bagasse in Aqueous Two-Phase Systems (ATPS): Exploration and Conceptual Process Design","Consorti Bussamra, B. (TU Delft BT/Bioprocess Engineering; University of Campinas); Meerman, Paulus (Student TU Delft); Viswanathan, V. (TU Delft BT/Design and Engineering Education); Mussatto, Solange I. (Technical University of Denmark); Carvalho da Costa, Aline (University of Campinas); van der Wielen, L.A.M. (TU Delft BT/Bioprocess Engineering; Bernal Institute); Ottens, M. (TU Delft BT/Bioprocess Engineering)","","2020","The enzymatic conversion of lignocellulosic material to sugars can provide a carbon source for the production of energy (fuels) and a wide range of renewable products. However, the efficiency of this conversion is impaired due to product (sugar) inhibition. Even though several studies investigate how to overcome this challenge, concepts on the process to conduct the hydrolysis are still scarce in literature. Aqueous two-phase systems (ATPS) can be applied to design an extractive reaction due to their capacity to partition solutes to different phases in such a system. This work presents strategies on how to conduct extractive enzymatic hydrolysis in ATPS and how to explore the experimental results in order to design a feasible process. While only a limited number of ATPS was explored, the methods and strategies described could easily be applied to any further ATPS to be explored. We studied two promising ATPS as a subset of a previously high throughput screened large set of ATPS, providing two configurations of processes having the reaction in either the top phase or in the bottom phase. Enzymatic hydrolysis in these ATPS was performed to evaluate the partitioning of the substrate and the influence of solute partitioning on conversion. 
Because ATPS are able to partition inhibitors (sugar) between the phases, the conversion rate can be maintained. However, phase forming components should be selected to preserve the enzymatic activity. The experimental results presented here contribute to a feasible ATPS-based conceptual process design for the enzymatic conversion of lignocellulosic material.","aqueous two-phase systems (ATPS); enzymatic hydrolysis; extractive process; product inhibition; sugarcane bagasse","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:d563d9cc-7296-42d9-9853-b2d8a5ae2756","http://resolver.tudelft.nl/uuid:d563d9cc-7296-42d9-9853-b2d8a5ae2756","An integrated methodology for the supply reliability analysis of multi-product pipeline systems under pumps failure","Zhou, X. (TU Delft Safety and Security Science; China University of Petroleum - Beijing); van Gelder, P.H.A.J.M. (TU Delft Safety and Security Science); Liang, Yongtu (China University of Petroleum - Beijing); Zhang, Haoran (University of Tokyo)","","2020","As the main way for the long-distance transportation of refined products, multi-products pipelines are of vital importance to the regional energy security. The supply reliability evaluation of multi-product pipeline systems can improve the effective response to unexpected disruptions and guarantee the reliable oil supply. Based on reliability theory and pipeline scheduling method, an integrated supply reliability evaluation methodology for multi-product pipeline systems is proposed in this paper and the pumps failure, of which influence is the most complex, is focused on. In the methodology, the discrete-time Markov process is adopted to describe the stochastic failure and the Monte Carlo method is used to simulate the system states transition. With the pipeline flowrate upper limits under various pumps failure scenarios optimized in advance, the maximum supply capacity to the downstream markets in each trial is calculated by the pipeline scheduling model. Three indicators are also developed to analyze the pipeline supply reliability from the holistic and individual perspectives. At last, the methodology application is performed on a real-world multi-product pipeline system in China and the supply reliability is analyzed in detail according to the simulation results. 
The methodology is shown to provide a practical method for emergency response decision-making and loss prevention.","Evaluation indicators; Multi-product pipeline; Pipeline scheduling method; Pumps failure; Stochastic process simulation; Supply reliability analysis","en","journal article","","","","","","","","2022-08-22","","","Safety and Security Science","","",""
"uuid:63f3450f-dace-45a9-88d1-54fc4ecfc115","http://resolver.tudelft.nl/uuid:63f3450f-dace-45a9-88d1-54fc4ecfc115","Mathematical morphology directly applied to point cloud data","Balado, Jesús (University of Vigo); van Oosterom, P.J.M. (TU Delft GIS Technologie); Díaz-Vilarino, L. (TU Delft GIS Technologie; University of Vigo); Meijers, B.M. (TU Delft GIS Technologie)","","2020","Many of the point cloud processing techniques have their origin in image processing. But mathematical morphology, despite being one of the most used image processing techniques, has not yet been clearly adapted to point clouds. The aim of this work is to design the basic operations of mathematical morphology applicable to 3D point cloud data, without the need to transform point clouds to 2D or 3D images and avoiding the associated problems of resolution loss and orientation restrictions. The object shapes in images, based on pixel values, are assumed to be the existence or absence of points, therefore, morphological dilation and erosion operations are focused on the addition and removal of points according to the structuring element. The structuring element, in turn, is defined as a point cloud with characteristics of shape, size, orientation, point density, and one reference point. The designed method has been tested on point clouds artificially generated, acquired from real case studies, and the Stanford bunny model. The results show a robust behaviour against point density variations and consistent with image processing equivalent. The proposed method is easy and fast to implement, although the selection of a correct structuring element requires previous knowledge about the problem and the input point cloud. 
In addition, the proposed method solves well-known point cloud processing problems such as object detection, segmentation, and gap filling.","Detection; Image processing; LiDAR; Occlusion correction; Point cloud processing; Segmentation","en","journal article","","","","","","","","","","","GIS Technologie","","",""
"uuid:6b72eb78-2844-4d21-ae79-0cc5081afcff","http://resolver.tudelft.nl/uuid:6b72eb78-2844-4d21-ae79-0cc5081afcff","Cultural life: Theory and empirical testing","Baciu, D.C. (TU Delft History, Form & Aesthetics; University of California)","","2020","“What is life?” and Erwin Schrödinger's answer, “negative entropy”, inspired researchers in the 20th century to unite physics, chemistry, and physiology into a new synthesis that many believe to be an important foundation for life science today. Decades after Schrödinger, life scientists are still fascinated by the riddle that entropy can only accumulate in physical systems, which often leads to biological deterioration and death, but life finds ways to persist and prevail. So to say, life “negates” entropy. Can this fascination and research concept be broadened even further to human culture? Short after Schrödinger's publication, Claude Shannon coined the term “information entropy.” Information entropy accumulates when noise interferes during communication. Eventually, all useful information is lost. Yet, from this observation, something surprising can be inferred. Not only biological life but also cultural life has the ability to persist and prevail in spite of the accumulation of entropy. Does this insight mean that cultural life also negates entropy, in Schrödinger's sense? These questions guided me over several years of research during which I developed and tested a new theory of culture based on variation-selection processes and homeostatic regulation. My contribution is to discover that these two processes not only make statements about biological life. They also explain some of the most important phenomena of culture: returning fashions, polarization, diversification, cycles of growth and reform, and the formation of common ethos across entire bodies of knowledge. 
With access to big data and supercomputing, I tested my theory against hundreds of thousands of news and magazine articles, books, and TV transcripts, as well as textual content collected from social media. Historical, institutional, and geographical information was extracted from these data using a new method; and new interactive tools were created to interpret the results. What should not be missed when reading this article is that the theory proposed here reveals a striking equivalence between nature and culture. The article states this equivalence in mathematical terms, and contextualizes it in the history of science. The mathematical breakthrough is relevant because it aligns the humanities with science while also allowing for live evaluation of what I call “cultural diversification cycles.”","Constructal law; Homeostatic regulation; Negative entropy; Quasispecies equation; Shannon communication; Variation-selection processes","en","journal article","","","","","","","","","","","History, Form & Aesthetics","","",""
"uuid:14f6b176-6a7f-4dd2-9e52-a7d4ef17b7f3","http://resolver.tudelft.nl/uuid:14f6b176-6a7f-4dd2-9e52-a7d4ef17b7f3","Optimality and Limitations of Audio-Visual Integration for Cognitive Systems","Boyce, William Paul (Ulster University); Lindsay, Anthony (Ulster University); Zgonnikov, A. (TU Delft Human-Robot Interaction); Rañó, Iñaki (Ulster University); Wong-Lin, Kong Fatt (Ulster University)","","2020","Multimodal integration is an important process in perceptual decision-making. In humans, this process has often been shown to be statistically optimal, or near optimal: sensory information is combined in a fashion that minimizes the average error in perceptual representation of stimuli. However, sometimes there are costs that come with the optimization, manifesting as illusory percepts. We review audio-visual facilitations and illusions that are products of multisensory integration, and the computational models that account for these phenomena. In particular, the same optimal computational model can lead to illusory percepts, and we suggest that more studies should be needed to detect and mitigate these illusions, as artifacts in artificial cognitive systems. We provide cautionary considerations when designing artificial cognitive systems with the view of avoiding such artifacts. Finally, we suggest avenues of research toward solutions to potential pitfalls in system design. We conclude that detailed understanding of multisensory integration and the mechanisms behind audio-visual illusions can benefit the design of artificial cognitive systems.","audio-visual illusions; Bayesian integration; cognitive systems; multi-modal processing; multisensory integration; optimality","en","review","","","","","","","","","","","Human-Robot Interaction","","",""
"uuid:345ab8ad-c10f-499a-b102-5bfbe41d254e","http://resolver.tudelft.nl/uuid:345ab8ad-c10f-499a-b102-5bfbe41d254e","Water Flow Behavior and Storage Potential of the Semi-Arid Ephemeral River System in the Mara Basin of Kenya","Wekesa, Sospeter Simiyu (IHE Delft Institute for Water Education; Technical University of Kenya); Stigter, Tibor Yvan (IHE Delft Institute for Water Education); Olang, Luke O. (Technical University of Kenya); Oloo, Francis (Technical University of Kenya); Fouchy, Kelly (IHE Delft Institute for Water Education); McClain, M.E. (TU Delft Water Resources; IHE Delft Institute for Water Education)","","2020","Alluvial corridors of ephemeral river systems provide viable opportunities for natural water storage in dry lands. Whilst alluvial corridors are widely recognized as water buffers, particularly for areas experiencing constant water scarcity, little research has been undertaken in Sub-Saharan Africa to explore their hydrological variability and water resource potential as alternative water sources for nearby communities. This study investigated the water flow behavior and storage potential of an ephemeral river system in the Mara Basin of Kenya for purposes of supporting water resources development and ecological sustainability. The water flow processes – including the recharge rates and water loss processes – from existing sand storage systems were established through monitoring of ground and surface water levels. Water samples along the alluvial corridor were collected and analyzed for major ions and isotopic signatures required to establish the water storage dynamics. The storage potential was estimated through Probing and Electrical Resistivity Tomography techniques, augmented with in-situ measurements of hydraulic conductivities and channel bed porosities. 
The mean annual storage volume in the alluvium of the study reach was estimated at 1.1 Mm3, potentially capable of providing for the annual domestic and livestock water demands of the area. Transmission losses into the alluvium beneath the ephemeral channel-bed were noted to attenuate the flood peak discharges, depending on the level of saturation of the alluvial bed. However, water storage in the alluvium was subject to losses through evapotranspiration and seepage through fractured bedrock. The study demonstrated the potential of alluvial corridors as water storage buffers providing alternative water sources to communities within dry land regions facing water scarcity, thereby supporting ecosystem sustainability.","alluvial corridors; ephemeral river systems; Kenya; Mara River Basin; water flow processes; water storage potential","en","journal article","","","","","","","","","","","Water Resources","","",""
"uuid:59110064-7964-4b1a-9dd3-eb19a3387007","http://resolver.tudelft.nl/uuid:59110064-7964-4b1a-9dd3-eb19a3387007","Towards an evolutionary-based approach for natural language processing","Manzoni, Luca (University of Trieste); Jakobovic, Domagoj (University of Zagreb); Mariot, L. (TU Delft Cyber Security); Picek, S. (TU Delft Cyber Security); Castelli, Mauro (New University of Lisbon)","","2020","Tasks related to Natural Language Processing (NLP) have recently been the focus of a large research endeavor by the machine learning community. The increased interest in this area is mainly due to the success of deep learning methods. Genetic Programming (GP), however, was not under the spotlight with respect to NLP tasks. Here, we propose a first proof-of-concept that combines GP with the well established NLP tool word2vec for the next word prediction task. The main idea is that, once words have been moved into a vector space, traditional GP operators can successfully work on vectors, thus producing meaningful words as the output. To assess the suitability of this approach, we perform an experimental evaluation on a set of existing newspaper headlines. Individuals resulting from this (pre-)training phase can be employed as the initial population in other NLP tasks, like sentence generation, which will be the focus of future investigations, possibly employing adversarial co-evolutionary approaches.","Genetic programming; Natural language processing; Next word prediction","en","conference paper","Association for Computing Machinery (ACM)","","","","","Accepted author manuscript","","","","","Cyber Security","","",""
"uuid:0f79ad47-3ddf-4048-9bba-201935f37c3c","http://resolver.tudelft.nl/uuid:0f79ad47-3ddf-4048-9bba-201935f37c3c","Electrical resistance tomography for control applications: Quantitative study of the gas-liquid distribution inside a cyclone","Sattar, Muhammad Awais (Lodz University of Technology); Martinez Garcia, M. (TU Delft ChemE/Transport Phenomena); Banasiak, Robert (Lodz University of Technology); Portela, L. (TU Delft ChemE/Transport Phenomena); Babout, Laurent (Lodz University of Technology)","","2020","Phase separation based centrifugal forces is effective, and thus widely explored by the process industry. In an inline swirl separator, a core of the light phase is formed in the center of the device and captured further downstream. Given the inlet conditions, this gas core created varies in shape and size. To predict the separation behavior and control the process in an optimal way, the gas core diameter should be measured with the minimum possible intrusiveness. Process tomography techniques such as electrical resistance tomography (ERT) allows us to measure the gas core diameter in a fast and non-intrusive way. Due to the soft-field nature and ill-posed problem in solving the inverse problem, especially in the area of low spatial resolution, the reconstructed images often overestimate the diameter of the object under consideration leading to unreliable measurements. To use ERT measurements as an input for the controller, the estimated diameters should be corrected based on secondary measurements, e.g., optical techniques such as high-speed cameras. In this context, image processing and image analysis techniques were adapted to compare the diameter calculated by an ERT system and a fast camera. In this paper, a correction method is introduced to correct the diameter obtained by ERT based on static measurements. 
The proposed method reduced the ERT error of dynamic measurements of the gas core size from over 300% to below 20%, making it a reliable sensing technique for controlled separation processes.","Digital image processing; Electrical resistance tomography (ERT); High-speed camera; Swirling two-phase flow","en","journal article","","","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:e7582270-8e3e-4cc6-af9d-6eca959a76d9","http://resolver.tudelft.nl/uuid:e7582270-8e3e-4cc6-af9d-6eca959a76d9","Ammonia removal from thermal hydrolysis dewatering liquors via three different deammonification technologies","Ochs, Pascal (Cranfield University; Thames Water Utilities Ltd.); Martin, Benjamin D. (Thames Water Utilities Ltd.); Germain, Eve (Thames Water Utilities Ltd.); Stephenson, Tom (Cranfield University); van Loosdrecht, Mark C.M. (TU Delft BT/Environmental Biotechnology); Soares, Ana (Cranfield University)","","2020","The benefits of deammonification to remove nitrogen from sidestreams, i.e., sludge dewatering liquors, in municipal wastewater treatment plants are well accepted. The ammonia removal from dewatering liquors originated from thermal hydrolysis/anaerobic digestion (THP/AD) are deemed challenging. Many different commercial technologies have been applied to remove ammonia from sidestreams, varying in reactor design, biomass growth form and instrumentation and control strategy. Four technologies were tested (a deammonification suspended sludge sequencing batch reactor (S-SBR), a deammonification moving bed biofilm reactor (MEDIA), a deammonification granular sludge sequencing batch reactor (G-SBR), and a nitrification suspended sludge sequencing batch reactor (N-SBR)). All technologies relied on distinct control strategies that actuated on the feed flow leading to a range of different ammonia loading rates. Periods of poor performance were displayed by all technologies and related to imbalances in the chain of deammonification reactions subsequently effecting both load and removal. The S-SBR was most robust, not presenting these imbalances. The S-SBR and G-SBR presented the highest nitrogen removal rates (NRR) of 0.58 and 0.56 kg N m−3 d−1, respectively. The MEDIA and the N-SBR presented an NRR of 0.17 and 0.07 kg N m−3 d−1, respectively. 
This study demonstrated stable ammonia removal from THP/AD dewatering liquors and did not observe toxicity in the nitrogen removal technologies tested. It was identified that instrumentation and control strategy was the main contributor that enabled higher stability and NRR. Overall, this study provides support in selecting a suitable biological nitrogen removal technology for the treatment of sludge dewatering liquors from THP/AD.","Deammonification; Granular sludge; Moving bed biofilm reactor; Sequencing batch reactor; Suspended sludge; Thermal hydrolysis process, THP/AD","en","journal article","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:4145d5da-1a3d-4d25-a519-510f91c6e0d5","http://resolver.tudelft.nl/uuid:4145d5da-1a3d-4d25-a519-510f91c6e0d5","Continuous production of enzymes under carbon-limited conditions by Trichoderma harzianum P49P11","Gelain, L. (TU Delft OLD BT/Cell Systems Engineering; University of Campinas); Kingma, Esther (Student TU Delft); Geraldo da Cruz Pradella, José (Universidade Federal de Sao Paulo); Carvalho da Costa, Aline (University of Campinas); van der Wielen, L.A.M. (TU Delft BT/Bioprocess Engineering; University of Limerick); van Gulik, W.M. (TU Delft BT/Industriele Microbiologie)","","2020","Carbon-limited chemostat cultures were performed using different carbon sources (glucose, 10 and 20 g/L; sucrose, 10 g/L; fructose/glucose, 5.26/5.26 g/L; carboxymethyl cellulose, 10 g/L; and carboxymethyl cellulose/glucose, 5/5 g/L) to verify the capability of the wild type strain Trichoderma harzianum to produce extracellular enzymes. All chemostat cultures were carried out at a fixed dilution rate of 0.05 h−1. Experiments using glucose, fructose/glucose and sucrose were performed in duplicate. Glucose condition was found to induce the production of enzymes that can catalyse the hydrolysis of p-nitrophenyl-β-D-glucopyranoside (PNPGase). A concentration of 20 g/L of glucose in the feed provided the highest productivity (1048 ± 16 U/mol h). Extracellular polysaccharides were considered the source of inducers. Based on the obtained results, a new PNPGase production process was developed using mainly glucose. This process raises interesting possibilities of synthesizing the inducer substrate and the induced enzymes in a single step using an easily assimilated carbon source under carbon-limited conditions.","Carbon limitation; Continuous fermentation process; Enzyme production; Extracellular polysaccharides; Glucose; PNPGase","en","journal article","","","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:4c7e85d2-06c4-4d6b-a58a-ee24ba46186e","http://resolver.tudelft.nl/uuid:4c7e85d2-06c4-4d6b-a58a-ee24ba46186e","Identification of unstable subsurface rock structure using ground penetrating radar: An eemd-based processing method","Jin, J. (TU Delft Railway Engineering); Duan, Yunling (Tsinghua University)","","2020","Surrounding rock quality of underground caverns is crucial to structural safety and stability in geological engineering. Classic measures for rock quality investigation are destructive and time consuming, and therefore technology evolution for efficiently evaluating rock quality is significantly required. In this paper, the non-destructive technology ground penetrating radar (GPR) assisted by an ensemble empirical mode decomposition (EEMD)-based signal processing approach is investigated for identifying unstable subsurface rock structures. By decomposing the pre-processed GPR signals into multiple intrinsic mode functions (IMFs) and residues, one typical IMF can preserve the distinct local modes and is considered to reconstruct the subterranean profile. Promising results have been achieved in simple scenarios and filed measurements. The reconstructed profiles can accurately illustrate the subsurface interfaces and eliminate the interference signals. Unstable rock structures have been identified in further field applications. Therefore, the developed approach is efficient in unstable rock structure identification.","Ensemble empirical mode decomposition; Ground penetrating radar; Radar signal processing; Underground detection; Unstable rock structure identification","en","journal article","","","","","","","","","","","Railway Engineering","","",""
"uuid:98b06910-2a4e-44f6-b341-ed144ffbd484","http://resolver.tudelft.nl/uuid:98b06910-2a4e-44f6-b341-ed144ffbd484","Susceptible-infected-spreading-based network embedding in static and temporal networks","Zhan, X. (TU Delft Multimedia Computing); Li, Z. (TU Delft Web Information Systems); Masuda, Naoki (University at Buffalo, State University of New York); Holme, Petter (Tokyo Institute of Technology); Wang, H. (TU Delft Multimedia Computing)","","2020","Link prediction can be used to extract missing information, identify spurious interactions as well as forecast network evolution. Network embedding is a methodology to assign coordinates to nodes in a low-dimensional vector space. By embedding nodes into vectors, the link prediction problem can be converted into a similarity comparison task. Nodes with similar embedding vectors are more likely to be connected. Classic network embedding algorithms are random-walk-based. They sample trajectory paths via random walks and generate node pairs from the trajectory paths. The node pair set is further used as the input for a Skip-Gram model, a representative language model that embeds nodes (which are regarded as words) into vectors. In the present study, we propose to replace random walk processes by a spreading process, namely the susceptible-infected (SI) model, to sample paths. Specifically, we propose two susceptible-infected-spreading-based algorithms, i.e., Susceptible-Infected Network Embedding (SINE) on static networks and Temporal Susceptible-Infected Network Embedding (TSINE) on temporal networks. The performance of our algorithms is evaluated by the missing link prediction task in comparison with state-of-the-art static and temporal network embedding algorithms. Results show that SINE and TSINE outperform the baselines across all six empirical datasets. 
We further find that the performance of SINE is mostly better than that of TSINE, suggesting that temporal information does not necessarily improve the embedding for missing link prediction. Moreover, we study the effect of the sampling size, quantified as the total length of the trajectory paths, on the performance of the embedding algorithms. The better performance of SINE and TSINE requires a smaller sampling size in comparison with the baseline algorithms. Hence, SI-spreading-based embedding tends to be more applicable to large-scale networks.","Link prediction; Network embedding; SI spreading process; OA-Fund TU Delft","en","journal article","","","","","","","","","","","Multimedia Computing","","",""
"uuid:7a23eabb-f8e4-4c44-ae25-b9bfa483ca58","http://resolver.tudelft.nl/uuid:7a23eabb-f8e4-4c44-ae25-b9bfa483ca58","Process Intensification of Mesoporous Material's Synthesis by Microwave-Assisted Surfactant Removal","López-Pérez, Lidia (University Medical Center Groningen); López-Martínez, Marco Antonio (Universidad Autónoma Metropolitana-Unidad Azcapotzalco, Mexico City); Djanashvili, K. (TU Delft BT/Biocatalysis); Góra-Marek, Kinga (Jagiellonian University); Tarach, Karolina A. (Jagiellonian University); Borges, María Emma (University of la Laguna, San Cristóbal de la Laguna); Melián-Cabrera, Ignacio (University of la Laguna, San Cristóbal de la Laguna)","","2020","Mesoporous materials are of vital importance for use in separation, adsorption, and catalysis. The first step in their preparation consists of synthesizing an organic-inorganic hybrid in which a structuring directing agent (SDA, normally a surfactant) is used to provide the desired porosity. The most common method to eliminate the SDA, and generate the porosity, is high-Temperature calcination. Such a process is energy-intensive and slow. In this study, we investigated alternative nonthermal surfactant removal methods on a soft MCM-41 material, aiming at reducing the processing time and temperature, while maximizing the material's properties. The choice of a soft MCM-41 is critical since it is hydrothermally unstable, whereas the SDA removal is troublesome. Microwave processing yielded outstanding performance in terms of surfactant removal, structural preservation, and textural features; the surfactant was fully removed, the hexagonal structure was preserved, and the surface was highly rich in Si-OH groups. It is suggested that H2O2 is the dominant oxidant. In terms of the process features, the processing time is significantly reduced, 14 h (calcination) versus 5 min (microwaves), and the applied temperature is much lower. 
The energy consumption was estimated to be 72% lower compared to calcination; therefore, this approach contributes to the process intensification of the production of a highly relevant material.","energy-saving processing; H2O2 oxidation; microwave-assisted processing; mild SDA removal; quick processing; structural preservation; structured mesoporous material","en","journal article","","","","","","Accepted Author Manuscript","","2021-10-26","","","BT/Biocatalysis","","",""
"uuid:635d7a4d-7fc0-45b4-9802-51924d7a64e9","http://resolver.tudelft.nl/uuid:635d7a4d-7fc0-45b4-9802-51924d7a64e9","Fast and robust low-rank approximation for five-dimensional seismic data reconstruction","Wu, Juan (Yangtze University, Wuhan); Bai, Min (Yangtze University, Wuhan); Zhang, D. (TU Delft ImPhys/Medical Imaging; TU Delft ImPhys/Computational Imaging); Wang, Hang (Zhejiang University); Huang, Guangtan (Zhejiang University); Chen, Yangkang (Zhejiang University)","","2020","Five-dimensional (5D) seismic data reconstruction becomes more appealing in recent years because it takes advantage of five physical dimensions of the seismic data and can reconstruct data with large gap. The low-rank approximation approach is one of the most effective methods for reconstructing 5D dataset. However, the main disadvantage of the low-rank approximation method is its low computational efficiency because of many singular value decompositions (SVD) of the block Hankel/Toeplitz matrix in the frequency domain. In this paper, we develop an SVD-free low-rank approximation method for efficient and effective reconstruction and denoising of the seismic data that contain four spatial dimensions. Our SVD-free rank constraint model is based on an alternating minimization strategy, which updates one variable each time while fixing the other two. For each update, we only need to solve a linear least-squares problem with much less expensive QR factorization. The SVD-based and SVD-free low-rank approximation methods in the singular spectrum analysis (SSA) framework are compared in detail, regarding the reconstruction performance and computational cost. 
The comparison shows that the SVD-free low-rank approximation method can obtain reconstruction performance similar to that of the SVD-based method, but with a large computational speedup.","Low-rank approximation; Matrix completion; Multidimensional seismic data; Seismic data processing; Seismic reconstruction","en","journal article","","","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:9cac5646-4b2b-4dea-a3a7-b945815b4a2f","http://resolver.tudelft.nl/uuid:9cac5646-4b2b-4dea-a3a7-b945815b4a2f","Control of a gas-liquid inline swirl separator based on tomographic measurements","Martinez Garcia, M. (TU Delft ChemE/Transport Phenomena); Sahovic, B. (Helmholtz Zentrum Dresden Rossendorf); Sattar, M.A. (Lodz University of Technology); Atmani, H. (Université de Toulouse); Schleicher, E. (Helmholtz Zentrum Dresden Rossendorf); Hampel, U. (Helmholtz Zentrum Dresden Rossendorf); Babout, L. (Lodz University of Technology); Legendre, D. (Université de Toulouse); Portela, L. (TU Delft ChemE/Transport Phenomena)","","2020","This text structures the application of Wire-Mesh sensors and Electrical Resistance Tomography in the control of an Inline Swirl Separator. It introduces a mechanistic model of the two-phase flow inside the device, which is linearized around an ideal perfect operation, and implemented in a Model Predictive Controller. The whole text is structured aiming at a future real application of the controller, briefly introducing the setup that is going to be used, the sensors and their working principles. The results obtained show a stable controller, able to regulate the process relatively fast in relation to the time resolution of the sensors. The positive response of the approach stimulates further improvements in the model developed, and the implementation of more sophisticated techniques to handle the non-linearities of the process.","Cyclone; Gas-liquid flow; Model based control; Process control; Swirl separator; Tomography","en","journal article","","","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:7b3a49b6-64d7-49e9-b631-aabcba6d1981","http://resolver.tudelft.nl/uuid:7b3a49b6-64d7-49e9-b631-aabcba6d1981","Designing and implementing gamification: GaDeP, gamifire, and applied case studies","Klemke, Roland (Open University of the Netherlands; Cologne Game Lab); Antonaci, Alessandra (European Association of distance Teaching Universities (EADTU)); Limbu, B.H. (TU Delft Web Information Systems)","","2020","Gamification aims at addressing problems in various fields such as the high dropout rates, the lack of engagement, isolation, or the lack of personalisation faced by Massive Open Online Courses (MOOC). Even though gamification is widely applied, not only in MOOCs, only few cases are meaningfully designed and empirically tested. The Gamification Design Process (GaDeP) aims to cover this gap. This article first briefly introduces GaDeP, presents the concept of meaningful gamification, and derives how it motivates the need for the Gamifire platform (as a scalable and platform-independent reference infrastructure for MOOC). Secondly, it defines the requirements for platformindependent gamification and describes the development of the Gamifire infrastructure. Thirdly we describe how Gamifire was successfully applied in four different cases. Finally, the applicability of GaDeP beyond MOOC is presented by reporting on a case study where GaDeP has been successfully applied by four student research and development projects. From both, the Gamifire cases and the GaDeP cases we derive the key contribution of this article: insights in the strengths and weaknesses of the Gamifire infrastructure as well as lessons learned about the applicability and limitations of the GaDeP framework. 
The paper ends by detailing our future work and planned development activities.","Architecture; Evaluation; GaDeP; Gamification; Gamification design process; Gamifire; Infrastructure; MOOC; Platform independence; Scalability; Transfer; Validation","en","journal article","","","","","","","","","","","Web Information Systems","","",""
"uuid:205cb887-23c7-4cf4-82d9-2618ff335b86","http://resolver.tudelft.nl/uuid:205cb887-23c7-4cf4-82d9-2618ff335b86","Process intensification education contributes to sustainable development goals. Part 2","Fernandez Rivas, David (University of Twente); Boffito, Daria C. (Polytechnique Montreal); Faria-Albanese, Jimmy (University of Twente); Glassey, Jarka (Newcastle University); Afraz, Nona (Otto-von-Guericke University); Akse, Henk (Process Intensification Network); Boodhoo, Kamelia V.K. (Newcastle University); Bos, Rene (Universiteit Gent); Cantin, Judith (Polytechnique Montreal); (Emily) Chiang, Yi Wai (University of Guelph); Commenge, Jean Marc (Lorraine University); Dubois, Jean Luc (Corporate R&D); Galli, Federico (Polytechnique Montreal); de Mussy, Jean Paul Gueneau (Katholieke Universiteit Leuven); Harmsen, Jan (Harmsen Consultancy BV); Kalra, Siddharth (Student TU Delft); Keil, Frerich J. (Hamburg University of Technology); Morales-Menendez, Ruben (Tecnologico de Monterrey); Navarro-Brull, Francisco J. (Universitat d'Alacant); Noël, Timothy (Eindhoven University of Technology); Ogden, Kim (University of Arizona); Patience, Gregory S. (Polytechnique Montreal); Reay, David (Newcastle University); Santos, Rafael M. (University of Guelph); Smith-Schoettker, Ashley (RAPID Manufacturing Institute); Stankiewicz, A.I. (TU Delft Intensified Reaction and Separation Systems); van den Berg, Henk (University of Twente); van Gerven, Tom (Katholieke Universiteit Leuven); van Gestel, Jeroen (Universiteit Utrecht); van der Stelt, Michiel (Universiteit Utrecht); van de Ven, Mark (Rijksinstituut voor Volksgezondheid en Milieu (RIVM)); Weber, R. S. (Pacific Northwest National Laboratory)","","2020","Achieving the United Nations sustainable development goals requires industry and society to develop tools and processes that work at all scales, enabling goods delivery, services, and technology to large conglomerates and remote regions. 
Process Intensification (PI) is a technological advance that promises to deliver the means to reach these goals, but higher education has yet to fully embrace it. Here, we present practical examples of how to better teach the principles of PI in the context of Bloom’s taxonomy and summarise the current industrial use and the future demands for PI, as a continuation of the topics discussed in Part 1. In the appendices, we provide details on the existing PI courses around the world, as well as teaching activities that are showcased during these courses to aid students’ lifelong learning. The increasing number of successful commercial cases of PI highlights the importance of PI education for both students in academia and industrial staff.","Chemical engineering; Education challenge; Entrepreneurship; Industry challenge; Pedagogy; Process design; Process intensification; Sustainability","en","journal article","","","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:ae542d13-795a-48b4-b457-b22e9d0a495e","http://resolver.tudelft.nl/uuid:ae542d13-795a-48b4-b457-b22e9d0a495e","Predicting the infuence of Urban vacant lots on neighborhood property values","Rahman, Muhammad Fazalul (Rochester Institute of Technology); Murukannaiah, P.K. (TU Delft Interactive Intelligence); Sharma, Naveen (Rochester Institute of Technology)","Janakiram, D. (editor); Sharma, N. (editor); Srinivasa, S. (editor)","2020","Vacant lots are municipally-owned land parcels which were acquired post-abandonment or due to tax foreclosures. With time, failure to sell or find alternate uses for vacant lots results in them causing adverse effects on the health and safety of residents, and cost the city both directly and indirectly. Although existing research has tried to define these impacts, cities need quantifiable evidence from within the city to make planning decisions based on these studies. Moreover, trying to understand the impact of vacant lots in an uncontrolled setting makes it difficult to perform A key problem with existing methodologies is that they tend to look at the city as a whole, while ignoring the diverse socioeconomic factors at play. Altogether, city planners are left with little or no actionable information to prioritize conversion of vacant lots. In contrast, for our research we try to model the city as blocks, census tracts and neighborhoods while using relevant features to capture key demographic, economic and geographic characteristics. In addition, we build a deep learning model to quantify the impact of vacant lots on changing property values so as to recommend conversions that yields the maximum benefit through property value tax increase. Our results indicate that our model is able to capture the relationship between vacant lots and property values better than conventionally used algorithms and data models. 
Further, our model specifically caters to small and mid-size cities, which are often neglected in mainstream urban computing research.","Computational social science; Deep learning; Gaussian processes; Spatiotemporal data; Urban computing; Vacant lots","en","conference paper","CEUR-WS","","","","","","","","","","Interactive Intelligence","","",""
"uuid:47ebf984-0d3a-4892-b6c9-6fc9b335a6ab","http://resolver.tudelft.nl/uuid:47ebf984-0d3a-4892-b6c9-6fc9b335a6ab","The open design education approach: An integrative teaching and learning concept for management and engineering","Binnekamp, R. (TU Delft Real Estate Management); Wolfert, A.R.M. (TU Delft Integral Design & Management); Kammouh, O. (TU Delft Integral Design & Management); Nogal Macho, M. (TU Delft Integral Design & Management)","Cardoso, Alberto (editor); Alves, Gustavo R. (editor); Restivo, Teresa (editor)","2020","Construction Management and Engineering students need to acquire managing skills for solving real-world problems that are complex, rarely straightforward and lack 'one right answer'. For this, they need to become 'open designers', capable to be reflective, integrative and creative in- and on action with dynamic and new situations. In this paper, the so-called Open Design Learning Circle (ODLC) will be proposed as an innovative educational concept in which engineering-, management- and pedagogic sciences are integrated. Within this concept the students 'dialogue' with: 1) an objective open glass box model covering engineering products and management processes (outer) and, 2) their subjective open human threefold, reflecting their personal learning (inner). The integration of both human and model dialogues is essential for the emergence of new knowledge and creative insights for open designs, which is essentially distinct from more traditional learning concepts. To enable this emergence, a self-chosen system of interest is the 'experiential vehicle' that forms the basis for a self-created textbook and model. Thereby, the ODLC forms the fundamental basis for creating 'open and persistent learners'. 
In this paper, it will also be shown how the ODLC can be operationalized into a learning cycle and how it has been implemented in an example course on systems engineering management within the MSc Construction Management Engineering curriculum at TU Delft. Finally, some preliminary student findings and next steps for further research are discussed.","Co-creating and co-sensing; Co-reflecting; Construction Management and Engineering; Experiential Learning; Integrative Education; Management Process/Engineering Product/Learning Person; Open Design Learning Circle/ Cycle; Problem Solving; System of Interest","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-12-25","","","Real Estate Management","","",""
"uuid:93bef689-a9ff-452d-90d0-bbca7e7209a6","http://resolver.tudelft.nl/uuid:93bef689-a9ff-452d-90d0-bbca7e7209a6","Polarimetric imaging mode of VLT/SPHERE/IRDIS: I. Description, data reduction, and observing strategy","De Boer, J. (Universiteit Leiden); Langlois, M. (Ecole Normale Supérieure de Lyon; Laboratoire d'Astrophysique de Marseille); Van Holstein, R. G. (Universiteit Leiden; European Southern Observatory (ESO)); Girard, J. H. (Space Telescope Science Institute); Mouillet, D. (Université Grenoble Alpes); Vigan, A. (Laboratoire d'Astrophysique de Marseille); Dohlen, K. (Laboratoire d'Astrophysique de Marseille); Snik, F. (Universiteit Leiden); Stam, D.M. (TU Delft Astrodynamics & Space Missions)","","2020","Context. Polarimetric imaging is one of the most effective techniques for high-contrast imaging and for the characterization of protoplanetary disks, and it has the potential of becoming instrumental in the characterization of exoplanets. The Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument installed on the Very Large Telescope (VLT) contains the InfraRed Dual-band Imager and Spectrograph (IRDIS) with a dual-beam polarimetric imaging (DPI) mode, which offers the capability of obtaining linear polarization images at high contrast and resolution. Aims. We aim to provide an overview of the polarimetric imaging mode of VLT/SPHERE/IRDIS and study its optical design to improve observing strategies and data reduction. Methods. For H-band observations of TW Hydrae, we compared two data reduction methods that correct for instrumental polarization effects in different ways: a minimization of the ""noise""image (Uφ), and a correction method based on a polarimetric model that we have developed, as presented in Paper II of this study. Results. We use observations of TW Hydrae to illustrate the data reduction. 
In the images of the protoplanetary disk around this star, we detect variability in the polarized intensity and angle of linear polarization that depend on the pointing-dependent instrument configuration. We explain these variations as instrumental polarization effects and correct for these effects using our model-based correction method. Conclusions. The polarimetric imaging mode of IRDIS has proven to be a very successful and productive high-contrast polarimetric imaging system. However, the instrument performance is strongly dependent on the specific instrument configuration. We suggest adjustments to future observing strategies to optimize polarimetric efficiency in field-tracking mode by avoiding unfavorable derotator angles. We recommend reducing on-sky data with the pipeline called IRDAP, which includes the model-based correction method (described in Paper II) to optimally account for the remaining telescope and instrumental polarization effects and to retrieve the true polarization state of the incident light.","Polarization; Protoplanetary disks; Techniques: high angular resolution; Techniques: image processing; Techniques: polarimetric","en","journal article","","","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:8eb3b72d-aff1-4e83-aef1-f7568b49426d","http://resolver.tudelft.nl/uuid:8eb3b72d-aff1-4e83-aef1-f7568b49426d","Speech technology for unwritten languages","Scharenborg, O.E. (TU Delft Multimedia Computing; Radboud Universiteit Nijmegen); Besacier, Laurent (LIG); Black, Alan W. (Carnegie Mellon University); Hasegawa-Johnson, Mark (University of Illinois at Urbana-Champaign); Metze, Florian (Carnegie Mellon University); Neubig, Graham (Carnegie Mellon University); Stueker, Sebastian (Karlsruhe Institut für Technologie); Godard, Pierre (LIMSI, ele-de-France); Mueller, M (Karlsruhe Institut für Technologie)","","2020","Speech technology plays an important role in our everyday life. Among others, speech is used for human-computer interaction, for instance for information retrieval and on-line shopping. In the case of an unwritten language, however, speech technology is unfortunately difficult to create, because it cannot be created by the standard combination of pre-trained speech-to-text and text-to-speech subsystems. The research presented in this article takes the first steps towards speech technology for unwritten languages. Specifically, the aim of this work was 1) to learn speech-to-meaning representations without using text as an intermediate representation, and 2) to test the sufficiency of the learned representations to regenerate speech or translated text, or to retrieve images that depict the meaning of an utterance in an unwritten language. The results suggest that building systems that go directly from speech-to-meaning and from meaning-to-speech, bypassing the need for text, is possible.","Speech processing; automatic speech recognition; image retrieval; speech synthesis; unsupervised learning","en","journal article","","","","","","","","","","","Multimedia Computing","","",""
"uuid:623aaa81-c694-4a27-bad8-2fabe77c7eed","http://resolver.tudelft.nl/uuid:623aaa81-c694-4a27-bad8-2fabe77c7eed","Dynamic vulnerability assessment of process plants with respect to vapor cloud explosions","Chen, C. (TU Delft Safety and Security Science); Khakzad, Nima (Toronto Metropolitan University); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Katholieke Universiteit Leuven)","","2020","Vapor cloud explosion (VCE) accidents in recent years such as the Buncefield accident in 2005 indicate that VCEs in process plants may lead to unpredicted overpressures, resulting in catastrophic disasters. Although a lot of attempts have been done to assess VCEs in process plants, little attention has been paid to the spatial-temporal evolution of VCEs. This study, therefore, aims to develop a dynamic methodology based on discrete dynamic event tree to assess the likelihood of VCEs and the vulnerability of installations. The developed methodology consists of six steps: (i) identification of hazardous installations and potential loss of containment (LOC), (ii) analysis of vapor cloud dispersion, (iii) identification and characterization of ignition sources, (iv) explosion frequency and delayed time assessment using the dynamic event tree, (v) overpressure calculation by the Multi-Energy method and (vi) damage assessment based on probit models. This methodology considers the time dependencies in vapor cloud dispersion and in the uncertainty of delayed ignitions. Application of the methodology to a case study shows that the methodology can reflect the characteristics of large VCEs and avoid underestimating the consequences. 
Moreover, this study indicates that ignition control may be regarded as a delay measure; effective emergency actions are therefore needed for preventing VCEs.","Dynamic event tree; Process plants; Spatial-temporal evolution; Uncertainty modeling; Vapor cloud explosion","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:b14b4d40-20cc-4199-8791-70fd67058cb8","http://resolver.tudelft.nl/uuid:b14b4d40-20cc-4199-8791-70fd67058cb8","Symmetric simple exclusion process in dynamic environment: Hydrodynamics","Redig, F.H.J. (TU Delft Applied Probability); Saada, Ellen (University of Paris); Sau, Federico (Institute of Science and Technology (IST Austria))","","2020","We consider the symmetric simple exclusion process in Zd with quenched bounded dynamic random conductances and prove its hydrodynamic limit in path space. The main tool is the connection, due to the self-duality of the process, between the invariance principle for single particles starting from all points and the macroscopic behavior of the density field. While the hydrodynamic limit at fixed macroscopic times is obtained via a generalization to the time-inhomogeneous context of the strategy introduced in [41], in order to prove tightness for the sequence of empirical density fields we develop a new criterion based on the notion of uniform conditional stochastic continuity, following [50]. In conclusion, we show that uniform elliptic dynamic conductances provide an example of environments in which the so-called arbitrary starting point invariance principle may be derived from the invariance principle of a single particle starting from the origin. Therefore, our hydrodynamics result applies to the examples of quenched environments considered in, e.g., [1], [3], [6] in combination with the hypothesis of uniform ellipticity.","Arbitrary starting point invariance principle; Dynamic random conduc-tances; Hydrodynamic limit; Symmetric simple exclusion process; Tightness criterion","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:5f6b509b-2723-41d6-939a-200f61badd0b","http://resolver.tudelft.nl/uuid:5f6b509b-2723-41d6-939a-200f61badd0b","Compressed-Domain Detection and Estimation for Colocated MIMO Radar","Tohidi, Ehsan (EURECOM Ecole d'Ingénieur et Centre de Recherche en Sciences du Numérique); Hariri, Alireza (Sharif University of Technology); Behroozi, Hamid (Sharif University of Technology); Nayebi, Mohammad Mahdi (Sharif University of Technology); Leus, G.J.T. (TU Delft Signal Processing Systems); Petropulu, Athina P. (Rutgers University–New Brunswick)","","2020","This article proposes a compressed-domain signal processing (CSP) multiple-input multiple-output (MIMO) radar, a MIMO radar approach that achieves substantial sample complexity reduction by exploiting the idea of CSP. CSP MIMO radar involves two levels of data compression followed by target detection at the compressed domain. First, compressive sensing is applied at the receive antennas, followed by a Capon beamformer, which is designed to suppress clutter. Exploiting the sparse nature of the beamformer output, a second compression is applied to the filtered data. Target detection is subsequently conducted by formulating and solving a hypothesis testing problem at each grid point of the discretized angle space. The proposed approach enables an eightfold reduction of the sample complexity in some settings as compared to a conventional compressed sensing (CS) MIMO radar, thus enabling faster target detection. Receiver operating characteristic curves of the proposed detector are provided. 
Simulation results show that the proposed approach outperforms recovery-based CS algorithms.","Capon beamformer; clutter suppression; colocated multiple-input multiple-output (MIMO) radar; compressed-domain signal processing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-06-30","","","Signal Processing Systems","","",""
"uuid:6f6da48a-d348-4f15-8ea7-38841e3dddd0","http://resolver.tudelft.nl/uuid:6f6da48a-d348-4f15-8ea7-38841e3dddd0","Reaching zero-defect manufacturing by compensation of dimensional deviations in the manufacturing of rotating hollow parts","Eger, F. (University of Stuttgart); Reiff, C. (University of Stuttgart); Tempel, P. (TU Delft Mechatronic Systems Design); Magnanini, M. C. (Politecnico di Milano); Caputo, D. (GKN Aerospace Engine Systems Norway); Lechler, A. (University of Stuttgart); Verl, A. (University of Stuttgart)","","2020","In many sectors such as the aerospace industry, the manufacturing of rotating components is based on multi-stage production systems to achieve the complex requirements of high quality products. Even in the presence of Industry 4.0 and the increasing connectivity, these systems are very prone to failure due to the high level of potential influences of both the system and the products, ultimately leading to defects. The project “ForZDM”, funded by the EU under Horizon2020, envisions reducing scrap rate by avoiding and compensating defects at an early stage thus guaranteeing a high quality product. This paper presents an approach using an existing manufacturing line to compensate the dimensional deviations of an inner contour of a turbine shaft at an early stage. Based on measurements of the inner contour, a new rotation axis for the subsequent manufacturing processes is calculated in order to avoid unbalances at the end-of-line control. Different algorithms are developed and integrated in a web-based application to find an optimal rotation axis under consideration of the to-be-manufactured outer contour in an operator-friendly usage on the shop floor. 
The application is connected with the measurement system and the subsequent CNC machine which enables automatic execution and data transfer.","Downstream compensation; Error compensation; Manufacturing process; Multi-stage; Unbalance; Zero-defect manufacturing","en","journal article","","","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:ec000cce-f827-4955-997b-4310bbc656db","http://resolver.tudelft.nl/uuid:ec000cce-f827-4955-997b-4310bbc656db","Portfolio-based airline fleet planning under stochastic demand","Sa, Constantijn A.A. (Student TU Delft); Santos, Bruno F. (TU Delft Air Transport & Operations); Clarke, J.B. (Georgia Institute of Technology)","","2020","Airlines operate their fleet of aircraft over a relatively long time horizon during which the realized stochastic demand has the potential to profoundly impact the airlines’ financial performance. This makes the investment in a fleet of aircraft a highly capital-intensive long-term commitment, associated with inherent risks. We propose an innovative three-step airline fleet planning methodology with the primary objective of identifying fleets that are robust to stochastic demand realizations. The methodology presents two main innovation aspects. The first one is the use of the mean reverting Ornstein–Uhlenbeck process to model the long-term travel demand, which is then combined with discrete-time Markov chain transitions to generate demand scenarios. The second innovative aspect is the adoption of a portfolio-based fleet planning perspective that allows for an explicit comparison of different fleets, in size and composition. Ultimately, the methodology yields for each fleet in the portfolio a distribution of net present values of operating profit across the planning horizon and a list of key financial and operational metrics per year. The robustest fleet can be selected based on the operating profit generating capability across different realizations of stochastic demand. An illustrative case study is presented as a proof of concept. 
The case study is used to demonstrate the type of results obtained and to discuss the usefulness of the methodology proposed.","Airline fleet planning; Ornstein–Uhlenbeck process; Portfolio-based planning; Robust planning","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-12-08","","","Air Transport & Operations","","",""
"uuid:c4b417ab-a88e-47e1-abf1-d4cad9daebf2","http://resolver.tudelft.nl/uuid:c4b417ab-a88e-47e1-abf1-d4cad9daebf2","Haptics: Science, Technology, Applications: 12th International Conference, EuroHaptics 2020, Leiden, The Netherlands, September 6–9, 2020, Proceedings","","Nisky, Ilana (editor); Hartcher-O'Brien, J. (editor); Wiertlewski, M. (editor); Smeets, Jeroen B J (editor)","2020","This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020.
The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.
To reduce the life cycle cost and failure rate of catenary in practice, planned and predictive maintenance is desired based on the condition monitoring of catenary. However, the monitoring data are underutilized to effectively assess the catenary condition and facilitate maintenance decision-making. This dissertation contributes to improving the dynamic condition assessment of catenary using the data from condition monitoring. New performance indicators (PIs) of catenary are defined in a way that is adaptive to the variations of monitoring data measured under different circumstances, such as changes of catenary structure, pantograph type and train speed. The relationship between the monitoring data and the contact wire irregularities is studied using historical data and simulations. Data-based approaches are developed for the quantitative assessment of dynamic catenary condition.
First, an intrinsic wavelength contained in the pantograph-catenary contact force is identified and defined as the catenary structure wavelength (CSW). It is caused by the periodic variation of contact wire stiffness attributed to the cyclic structure of the catenary, which regulates the height of the contact wire over every span and interdropper distance. An approach that adaptively extracts the CSWs of the pantograph-catenary contact force is proposed based on the empirical mode decomposition algorithm. It extracts the CSW signals corresponding to the span lengths and interdropper distances, respectively, summing them to form a characteristic signal of CSWs. The residual signal of the contact force excluding the CSWs is regarded as the non-CSW signal. The mean and standard deviation of the CSWs signal are used as PIs to indicate the condition of the main catenary geometric parameters. A PI based on the quadratic time-frequency representation of the non-CSW signal is proposed for detecting and localizing local irregularities of the contact wire. The proposed PIs are tested by simulation and measurement data and proven effective and adaptive owing to the use of the CSWs and non-CSW signals.
Second, the concept of CSW is expanded to the pantograph head acceleration, from which the CSWs and non-CSW signal can also be extracted using the same approach developed for the contact force. Considering the characteristics of pantograph head acceleration, the wavelet packet entropies of the CSWs and non-CSW signals are proposed as PIs for detecting contact wire irregularities of different lengths. The entropy of the CSWs is used for detecting irregularities with a length longer than 5 m, while the entropy of the non-CSW signal is used for the short-length local irregularities. An approach to detect and verify contact wire irregularities using the measurement data of pantograph head vertical acceleration from frequent inspections is proposed. The approach is tested using historical inspection data from which irregularities at all lengths are detected and verified. Maintenance resources can thus be specifically allocated to verified detection results to save cost and time.
Third, through analyzing historical inspection data and data-based simulation results, it is found that while the contact wire irregularity deteriorates the pantograph-catenary interaction, the formation of irregularity is also associated with effects of the interaction such as variations of the contact and friction forces. Concretely, a contact wire height irregularity with an amplitude of 8 mm can cause a considerable increase in the standard deviation of the pantograph-catenary contact force. In addition, an irregularity with a certain wavelength can induce a dynamic response with the same wavelength in the contact force. This in turn makes the irregular part deteriorate faster than the other parts of the catenary. At a smaller scale, when the wear irregularity of the contact wire has an average wire thickness loss of about 1.5 mm, it can also increase the standard deviation of the contact force by more than 5%. Due to the fixing effect at the registration arms and droppers, the wear irregularity commonly contains structural wavelengths of the catenary, including span lengths and interdropper distances. It is also found that the wear irregularity tends to grow and spread in the common or dominant running direction of trains on the specific line. Nevertheless, an existing defect may not affect every pantograph passage and every type of data measured. It is thus advised to measure multiple types of data and perform more frequent inspections to avoid undetected defects.
Last, a data-driven approach using the Bayesian network (BN) to fuse the available inspection data of catenary into an integrated PI is proposed. The BN topology is first structured based on the physical relations between five data types including the train speed, dynamic stagger and height of contact wire, pantograph head acceleration, and pantograph-catenary contact force. Then, tailored PIs are individually defined and extracted from the five types of data as the BN input. As the output of the BN, an integrated PI is defined as the overall condition level of catenary considering all defects that can be reflected by the five types of data. Finally, using historical inspection data and maintenance records from a section of high-speed line, the BN parameters are estimated to establish a probabilistic relationship between the input and the output PI. By testing the BN-based approach using new inspection data from the same railway line, it is shown that the integrated PI can adequately represent the catenary condition, leading to a considerable reduction in the false alarm rate of catenary defect detection compared with the current practice. The approach can also work acceptably with noisy or partly missing data.
In summary, this dissertation answers how to adequately transform the condition monitoring data of catenary into quantitative assessments of the dynamic catenary condition. The proposed approaches are intended for generic implementations in railway catenaries worldwide.","railway catenary; condition assessment; pantograph-catenary interaction; performance indicator; adaptive data processing; data-driven approach; catenary structure wavelength","en","doctoral thesis","","978-94-6323-962-2","","","","","","2019-11-18","","","Railway Engineering","","",""
"uuid:d10bd574-f6af-4766-931b-8c11e0fafcb2","http://resolver.tudelft.nl/uuid:d10bd574-f6af-4766-931b-8c11e0fafcb2","Fuelling the hydrogen economy: Scale-up of an integrated formic acid-to-power system","van Putten, R. (TU Delft ChemE/Inorganic Systems Engineering); Wissink, Tim (Eindhoven University of Technology); Swinkels, Tijn (Eindhoven University of Technology; Automotive Campus, Helmond); Pidko, E.A. (TU Delft ChemE/Inorganic Systems Engineering; TU Delft ChemE/Algemeen)","","2019","Transitioning from fossil fuels to sustainable and green energy sources in mobile applications is a difficult challenge and demands sustained and highly multidisciplinary efforts in R&D. Liquid organic hydrogen carriers (LOHC) offer several advantages over more conventional energy storage solutions, but have not been yet demonstrated at scale. Herein we describe the development of an integrated and compact 25 kW formic acid-to-power system by a team of BSc and MSc students. We highlight a number of key engineering challenges encountered during scale-up of the technology and discuss several aspects commonly overlooked by academic researchers. Conclusively, we provide a critical outlook and suggest a number of developmental areas currently inhibiting further implementation of the technology.","Chemical process development; Dehydrogenation; Energy production; Formic acid; Homogeneous catalysis; Hydrogen","en","journal article","","","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:155cd8db-619f-49f6-a3db-1bdb6036420c","http://resolver.tudelft.nl/uuid:155cd8db-619f-49f6-a3db-1bdb6036420c","Spreading on Networks","Liu, Q. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Delft University of Technology (degree granting institution)","2019","Spreading phenomena such as spreading of diseases, information and computer viruses are ubiquitous in nature and man-made systems, but the understanding of them is still insufficient. This dissertation focuses on the analysis of a basic mathematical model of spreading phenomena running on underlying network structures and aims to complete the basic theory of spreading processes. Specifically, we explore the Susceptible-Infected-Susceptible (SIS) model from several interesting perspectives to contribute to the state-of-the-art understanding of the model.
Our first main contribution is related to temporal correlations. In most of the studies, the influence of time in the SIS spreading process is omitted because the specific value of the infection and curing rates does not influence the first-moment metastable properties, such as the infection probability of each node. Only the ratio between the two rates matters. In this dissertation, we show that the temporal correlation can be analyzed with the mean-field approaches, although mean-field methods are meant to only analyze first-moment properties. We derive the autocorrelation of the nodal infection state both in the steady and transient states under the mean-field approximation. By analyzing the autocorrelation, we indicate the influence of the underlying network and the value of the infection and curing rates on the temporal properties of the spreading process. We also show that the infection and curing rates can be calculated by measuring the infection state of each node.
Second, we relax the Markovian assumption in the SIS process by extending the Poisson infection process to a Weibull renewal process. The Poisson infection process is just a special case of the Weibullian renewal process. Under this Weibullian framework, we can parameterize the non-Markovian infection behavior and show some new features arising from it. We specifically focus on an extreme (limiting) case of the Weibullian SIS process where the distribution of the infection time is a Dirac delta function. The analysis of the extreme case leads to the largest possible epidemic threshold for non-Poissonian infection processes. We further discuss the epidemic threshold for different infection processes with Weibull, lognormal and Gamma distributed infection times, which fit realistic spreading phenomena well, under a previous non-Markovian mean-field method based on renewal theory. We show consistency between our results and previous theory, and that those different infection processes behave similarly.
Third, we dive into the localization phenomena in networks from the viewpoint of SIS spreading processes. Localization of the spreading process appears just above the epidemic threshold in networks whose principal eigenvector of the adjacency matrix is localized. In the localized spreading, the prevalence (order parameter), which is the expected fraction of infected nodes, converges to zero with the increase of network size, but the number of infected nodes is non-zero. Thus, the localized spreading forms an interesting phase different from the all-healthy phase (no infection) and the endemic phase (non-zero prevalence). We evaluate the above-mentioned extreme case of the Weibullian SIS process where the time-dependent prevalence is periodic in the long run. Near the epidemic threshold, the ratio between the steady-state maximum and minimum prevalence, which equals the largest eigenvalue of the adjacency matrix, diverges in some networks, but the spreading process is still localized. In other words, the divergent ratio of prevalence, determined by the largest eigenvalue of the network, cannot amplify a zero prevalence to a non-zero one in the thermodynamic limit. The result indicates that the localization of spreading processes may be determined only by the network structure and not by the specific infection process.
Finally, we study the curing strategy for the control of the spreading process, specifically, the pulse curing strategy. Compared to the classical asynchronous curing strategy (for instance Poissonian), the pulse strategy is an optimized method of suppressing the spreading and is applied broadly in disease control. Here, we study a model composed of a susceptible-infected process and a periodic pulse curing process with a successful curing probability below one. We derive the mean-field epidemic threshold. Based on our analysis, the pulse strategy reduces the number of curing operations by 36.8% compared to traditional asynchronous curing strategies in the Markovian SIS model.
All the above-mentioned theoretical analyses are verified by directly simulating SIS processes.","Spreading Process; Complex Networks; Stochastic Simulation","en","doctoral thesis","","978-94-6384-074-3","","","","","","","","","Network Architectures and Services","","",""
"uuid:30966f68-cea2-4669-93da-23a477d0978b","http://resolver.tudelft.nl/uuid:30966f68-cea2-4669-93da-23a477d0978b","Detection of factors that determine the quality of industrial minerals: An infrared sensor-based approach for mining and process control","Guatame-Garcia, Adriana (TU Delft Resource Engineering)","Buxton, M.W.N. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2019","Industrial minerals are essential to human activity. The products derived from them make an integral part of a wide range of materials that are ubiquitously present in our daily lives. The performance and attributes of these materials depend significantly on the properties and quality of the industrial minerals and the products generated from them. These characteristics are ensured by the selection and mining of adequate ores, and by using various beneficiation and processing strategies to modify or enhance the original properties of the minerals.
One example of these strategies is calcination, in which the minerals are subject to thermal treatment. The success of the generation of high-quality products by using this technique partly depends on the capability of the plant to detect the factors that can degrade the quality of the raw ore, feed for calcination and final product. It also depends on its ability to inform and adapt the operations according to the presence of such factors. A possible approach for doing this is to characterise the minerals and materials with sensor technologies that can generate information on-site and in real-time, focusing on the identification of the degrading factors. Their timely detection can give operational feedback to the process and aid in the generation of high-quality products.
This thesis aims to develop methods for the detection of factors that determine the quality of industrial mineral products by using data derived from infrared sensors, which have the potential to be implemented in mining and process control. To this end, kaolin, perlite and diatomite have been selected as commodities that are relevant to the market and that represent different applications. This research shows the capacity of infrared sensor-based technologies to retrieve information, directly or indirectly, about the factors that affect the quality of industrial minerals at a lower cost and with comparable efficiency to other analytical methods.
In parallel with the improvements in computer-to-computer communication, the emergence of new paradigms such as the Internet of Things (IoT), Big Data processing and cloud computing in recent years has placed an increasing importance on networked systems in many facets of the modern world. From power grid management, to autonomous vehicle navigation, to even our basic means of interaction through social media, these networks are a pervasive presence in our day-to-day lives. The vast amounts of data generated by these networks and their ever increasing sizes make it impractical, if not impossible, to resort to traditional centralized processing and therefore necessitate the search for new methods of signal processing within networked systems.
In this thesis we approach the task of distributed signal processing by exploiting the synergy between such tasks and equivalent convex optimization problems. Specifically, we focus on distributed convex optimization, the task of solving optimization problems collaboratively across groups of computers, and on the development of distributed solvers for such tasks. Such solvers distinguish themselves by allowing only local computations at each computer in a network and the exchange of information between connected computers. In this way, distributed solvers naturally respect the structure of the underlying network in which they are deployed.
In pursuit of our goal, we approach the task of distributed solver design through the lens of monotone operator theory. This theory provides a well-known platform for the derivation of many first-order convex solvers, and herein we demonstrate its use as a means of constructing and analyzing a number of algorithms for distributed optimization. The first major contribution of this thesis lies in the analysis and understanding of an existing algorithm for distributed optimization within the literature termed the primal dual method of multipliers (PDMM). In particular, by demonstrating a novel interpretation of PDMM from the perspective of monotone operator theory, we are able to better understand its convergence characteristics and highlight sufficient conditions under which PDMM converges at a geometric rate. Furthermore, we quantify the impact that network topology has on these convergence rates, drawing a direct connection between spectral characteristics of networks and distributed optimization.
Secondly, we explored the space of solver design by proposing novel algorithms for distributed networks. For the family of separable optimization problems, those with separable objectives and constraints, we demonstrated a distributed solver design using a specific lifted dual form. Based on monotone operator theory, the convergence analysis of the proposed method followed naturally from well-known results and broadened the class of distributable problems compared to the likes of PDMM. Furthermore, in the case of time-varying consensus problems, we again proposed a new algorithm by combining a network-dependent metric choice with classic operator splitting methods. Again, the monotone basis of this algorithm facilitated its convergence analysis, and the method was empirically shown to converge for general closed, convex and proper functions.
Finally, we demonstrated how these methods can be used for practical distributed signal processing in networks by considering the case of multichannel speech enhancement in wireless acoustic sensor networks. By combining a particular modeling of the acoustic scene with the algorithms mentioned above, the proposed method was not only distributable but also offered greater resilience to steering vector mismatch than other standard approaches. This example also highlights the importance of understanding both the target application and the distributed solvers themselves in developing effective solutions.
Overall, this thesis provides a first foray into the world of distributed optimization through the lens of monotone operator theory. We feel that this perspective offers an ideal reference point for the analysis of such algorithms while also providing a general framework for convex solver design. While this thesis is not the end of this branch of research, it indicates the potential of monotone operator theory as a unifying method for the development and analysis of distributed optimization solutions.","Distributed Signal Processing; Convex Optimization; Monotone Operator Theory; Wireless Sensor Networks","en","doctoral thesis","","978-94-6384-041-5","","","","","","","","","Signal Processing Systems","","",""
"uuid:3836496f-7130-42a3-9703-3897f2d7bb0a","http://resolver.tudelft.nl/uuid:3836496f-7130-42a3-9703-3897f2d7bb0a","New Materials and Processes for Transport Applications: Going Hybrid and Beyond","Lehmhus, Dirk (Fraunhofer Institute for Manufacturing Technology and Advanced Materials IFAM); von Hehl, Axel (University of Bremen); Hausmann, Joachim (Erwin-Schroedinger); Kayvantash, Kambiz (Société CADLM); Alderliesten, R.C. (TU Delft Structural Integrity & Composites); Hohe, Jörg (Fraunhofer Institute for Mechanics of Materials (IWM))","","2019","The present text introduces a Special Section of Advanced Engineering Materials linked to the symposium Advanced Materials for Transport Applications organized by the authors within the framework of the EUROMAT 2017 conference. It introduces the contributions that make up this Special Section, and takes the fact that a majority of them is related to the broader topic of hybrid materials and structures as a motivation for a short overview of this exciting area of research.","aerospace industry; automotive industry; hybrid manufacturing processes; hybrid materials and structures; lightweight design; maritime industry; railway industry","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-09-01","","","Structural Integrity & Composites","","",""
"uuid:ea79ba64-262f-4696-abda-f7d143b97bc9","http://resolver.tudelft.nl/uuid:ea79ba64-262f-4696-abda-f7d143b97bc9","Planning under Uncertainty in Constrained and Partially Observable Environments","Walraven, E.M.P. (TU Delft Algorithmics)","Spaan, M.T.J. (promotor); Witteveen, C. (promotor); Delft University of Technology (degree granting institution)","2019","Developing intelligent decision making systems in the real world requires planning algorithms which are able to deal with sources of uncertainty and constraints. An example can be found in smart distribution grids, in which planning can be used to decide when electric vehicles charge their batteries, such that the capacity limits of lines are respected at all times. In this particular example there can be uncertainty in the arrival time and charging demand of vehicles, and constraints follow directly from the capacity limits of the distribution grid to which vehicles are connected. Existing algorithms for planning under uncertainty subject to constraints are currently not suitable for these types of applications, and therefore this dissertation aims to improve the applicability of these algorithms by advancing the state of the art in constrained multi-agent planning under uncertainty. The dissertation presents new algorithmic techniques for exact POMDP planning, finite-horizon POMDPs and POMDPs with constraints. Additionally, the dissertation shows how models for constrained planning can be used in smart distribution grids.","planning under uncertainty; smart grids; markov decision process; partially observable markov decision process","en","doctoral thesis","","978-94-6384-034-7","","","","","","2019-05-27","","","Algorithmics","","",""
"uuid:4015bd83-0c4f-4951-b5b0-ec3cd44f7746","http://resolver.tudelft.nl/uuid:4015bd83-0c4f-4951-b5b0-ec3cd44f7746","System-level sub-20 nm planar and FinFET CMOS delay modelling for supply and threshold voltage scaling under process variation","Majzoub, S. (TU Delft Computer Engineering; University of Sharjah); Taouil, M. (TU Delft Computer Engineering); Hamdioui, S. (TU Delft Quantum & Computer Engineering)","","2019","Standard low power design utilizes a variety of approaches for supply and threshold control to reduce dynamic and idle power. At a very early stage of the design cycle, the Vdd and Vth values are estimated, based on the power budget, and then used to scale the delay and estimate the design performance. Furthermore, process variation in sub-20 nm feature technologies introduces a substantial impact on speed and power. Thus, the impact of such variation on the scaled delay also has to be considered in the performance estimation. In this paper, we propose a system-level model to estimate this delay, taking into consideration voltage scaling under within-die process variation for both planar and FinFET CMOS transistors in the sub-20 nm regime. The model is simple, has acceptable accuracy and is particularly useful for architectural-level simulations for low-power design exploration at an early stage of design space exploration. The proposed model estimates the delay in different supply voltage and threshold voltage ranges. The model uses a modified alpha-power equation to measure the delay of the critical path of a computational logic core. The targeted technology nodes are 14 nm, 10 nm, and 7 nm for FinFETs, and 22 nm and 16 nm for planar CMOS. Within-die process variation is assumed to be lumped in with the threshold voltage and the transistor channel length and width to simplify its impact on delay.
For the given technology nodes, the average percentage error of the proposed delay equation compared to HSPICE is between 0.5% and 14%.","Alpha-Power Model; FinFET; Low-Power Design; Multi-V; Planar CMOS; Process Variation; System-Level Modelling; Voltage Scaling; Within-Die Variation","en","journal article","","","","","","","","","","Quantum & Computer Engineering","Computer Engineering","","",""
"uuid:bd62b547-e175-4c3c-9699-65785e3d2437","http://resolver.tudelft.nl/uuid:bd62b547-e175-4c3c-9699-65785e3d2437","Elimination of unsteady background reflections in PIV images by anisotropic diffusion","Adatrao, S. (TU Delft Aerodynamics); Sciacchitano, A. (TU Delft Aerodynamics)","","2019","A novel approach is introduced that allows the elimination of undesired laser light reflections from particle image velocimetry (PIV) images. The approach relies upon anisotropic diffusion of the light intensity, which is used to generate a background image to be subtracted from the original image. The intensity is diffused only along the edges and not across the edges, thus allowing one to preserve, in the background image, the shape of boundaries such as laser light reflections on solid surfaces. Due to its ability to produce a background image from a single snapshot, as opposed to most methods that make use of intensity information in time, the technique is particularly suitable for elimination of reflections in PIV images of unsteady models, such as transiting objects, propellers, and flapping and pitching wings. The technique is assessed on an experimental test case which considers the flow in front of a propeller, where the laser light reflections on the model's surface preclude accurate determination of the flow velocity. Comparison of the anisotropic diffusion approach with conventional techniques for suppression of light reflections shows the advantages of the former method, especially when reflections need to be removed from individual images.","particle image velocimetry; image pre-processing; anisotropic diffusion; background removal; unsteady light reflections","en","journal article","","","","","","","","","","","Aerodynamics","","",""
"uuid:e52cc182-457c-4687-baee-d0f72af36950","http://resolver.tudelft.nl/uuid:e52cc182-457c-4687-baee-d0f72af36950","Graph-time signal processing: Filtering and sampling strategies","Isufi, E. (TU Delft Signal Processing Systems)","Leus, G.J.T. (promotor); Delft University of Technology (degree granting institution)","2019","The necessity to process signals living in non-Euclidean domains, such as signals defined on top of a graph, has led to the extension of signal processing techniques to the graph setting. Among different approaches, graph signal processing distinguishes itself by providing a Fourier analysis of these signals. Analogously to the Fourier transform for time and image signals, the graph Fourier transform decomposes graph signals in terms of the harmonics provided by the underlying topology. For instance, a graph signal characterized by a slow variation between adjacent nodes has a low frequency content.
Along with the graph Fourier transform, graph filters are the key tool to alter the graph frequency content of a graph signal. This thesis focuses on graph filters that are performed distributively in the node domain, that is, each node needs to exchange information only with its neighbors to perform a given filtering operation. Similarly to classical filters, we propose ways to design and implement distributed finite impulse response and infinite impulse response graph filters.
One of the key contributions of this thesis is to bring the temporal dimension to graph signal processing and build a graph-time signal processing framework. This is done in different ways. First, we analyze the effects that temporal variations in the graph signal and graph topology have on the filtering output. Second, we introduce the notion of joint graph-time filtering. Third, we present a statistical analysis of distributed graph filtering when the graph signal and the graph topology change randomly in time. Finally, we extend the sampling framework from the reconstruction of graph signals to the observation and tracking of time-varying graph processes.
We characterize the behavior of distributed autoregressive moving average (ARMA) graph filters when the graph signal and the graph topology are time-varying. The latter analysis is exploited in two ways: (i) to quantify the limitations of graph filters in a dynamic environment, such as moving sensors processing a time-varying signal in a sensor network; and (ii) to provide ways to filter time-varying graph signals with low computation and communication complexity.
We develop the notion of distributed graph-time filtering, an operation that jointly processes the graph frequencies of a time-varying graph signal on one hand and its temporal frequencies on the other hand. We propose distributed finite impulse response and infinite impulse response recursions to implement a two-dimensional graph-time filtering operation. Finally, we propose design strategies to find the filter coefficients that approximate a desired two-dimensional frequency response.
We extend the analysis of graph filters to a stochastic environment, i.e., when the graph topology and the graph signal change randomly over time. By characterizing the first and second order moments of the filter output, we quantify the impact of the graph signal and graph topology randomness on the distributed filtering operation. The latter allows us to develop the notion of graph filtering in the mean, which is also used to ease the computational burden of classical graph filters.
Finally, we propose a sampling framework for time-varying graph signals. Particularly, when the graph signal changes over time following a state-space model, we extend the graph signal sampling theory to the tasks of observing and tracking the time-varying graph signal from a few relevant nodes. The latter theory considers graph signal sampling as a particular case and shows that tools from sparse sensing and sensor selection can be used for sampling.
k are drawn i.i.d. from jump measure μ. A high-dimensional wavelet series prior for the Lévy measure ν = λμ is devised and the posterior distribution arises from observing discrete samples Y_Δ, Y_{2Δ}, …, Y_{nΔ} at fixed observation distance Δ, giving rise to a nonlinear inverse inference problem. We derive contraction rates in uniform norm for the posterior distribution around the true Lévy density that are optimal up to logarithmic factors over Hölder classes, as sample size n increases. We prove a functional Bernstein–von Mises theorem for the distribution functions of both μ and ν, as well as for the intensity λ, establishing the fact that the posterior distribution is approximated by an infinite-dimensional Gaussian measure whose covariance structure is shown to attain the information lower bound for this inverse problem. As a consequence posterior based inferences, such as nonparametric credible sets, are asymptotically valid and optimal from a frequentist point of view.","Bayesian nonlinear inverse problems; Compound Poisson processes; Lévy processes; Asymptotics of nonparametric Bayes procedures","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:ec2003f6-656f-4bb4-8ddc-fadb42bec488","http://resolver.tudelft.nl/uuid:ec2003f6-656f-4bb4-8ddc-fadb42bec488","Sprinting Out of Stuckness: Overcoming Moments of Stuckness to Support the Creativity Flow in Agile Team Settings","Shah, Ashni (Student TU Delft); Huidobro Pereda, Alfonso (Student TU Delft); Gonçalves, M. (TU Delft Methodologie en Organisatie van Design)","Badke Schaub, Petra (editor); Kleinsmann, Maaike (editor)","2019","Multidisciplinary agile teams working in fast-paced, delivery-oriented sprint cycles of two weeks can experience moments of stuckness. Typically, these moments can be characterised by the inability to continue, which can be quite detrimental in agile settings, where time is pressured. This paper aims to explore these moments of stuckness, to understand when and why they occur, and to understand the different strategies Scrum teams use to overcome them, at both a personal and a team level. A combination of interviews and observations was conducted with six Scrum team members and two experts to understand their process and experiences while working in an agile set-up. We have identified five strategies, which strongly rely on the agile values of collaboration, communication, and creativity. These are: looking for expert guidance, open communication, creating spike stories, visual communication, and incubation. The findings from this study provide both practice and academia with a deeper understanding of how creativity can be supported in agile settings.","Agile; Creativity; Design process; Scrum; Teamwork","en","conference paper","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:53caa9e3-6672-4f52-9596-7b4a959e515a","http://resolver.tudelft.nl/uuid:53caa9e3-6672-4f52-9596-7b4a959e515a","Fast nonlinear Fourier transform algorithms using higher order exponential integrators","Chimmalgi, S. (TU Delft Team Raf Van de Plas); Prins, Peter J. (TU Delft Team Raf Van de Plas); Wahls, S. (TU Delft Team Raf Van de Plas)","","2019","The nonlinear Fourier transform (NFT) has recently gained significant attention in fiber optic communications and other engineering fields. Although several numerical algorithms for computing the NFT have been published, the design of highly accurate low-complexity algorithms remains a challenge. In this paper, we present new fast forward NFT algorithms that achieve accuracies that are orders of magnitudes better than current methods, at comparable run times and even for moderate sampling intervals. The new algorithms are compared to existing solutions in multiple, extensive numerical examples.","Nonlinear Fourier transform; transforms for signal processing; nonlinear signal processing; Eigenvalues and eigenfunctions","en","journal article","","","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:a3c2dcff-6639-4600-abb7-256608852567","http://resolver.tudelft.nl/uuid:a3c2dcff-6639-4600-abb7-256608852567","A multi-level model on automated vehicle acceptance (MAVA): a review-based study","Nordhoff, S. (TU Delft Transport and Planning; Innovation Centre for Mobility and Societal Change); Kyriakidis, M. (Paul Scherrer Institut); van Arem, B. (TU Delft Transport and Planning); Happee, R. (TU Delft Intelligent Vehicles)","","2019","Automated vehicle acceptance (AVA) is a necessary condition for the realisation of higher-level objectives such as improvements in road safety, reductions in traffic congestion and environmental pollution. On the basis of a systematic literature review of 124 empirical studies, the present study proposes MAVA, a multi-level model to predict AVA. It incorporates a process-oriented view on AVA, considering acceptance as the result of a four-stage decision-making process that ranges from the exposure of the individual to automated vehicles (AVs) in Stage 1, the formation of favourable or unfavourable attitudes towards AVs in Stage 2, making the decision to adopt or reject AVs in Stage 3, to the implementation of AVs into practice in Stage 4. MAVA incorporates 28 acceptance factors that represent seven main acceptance classes. The acceptance factors are located at two levels, i.e., micro and meso. Factors at the micro-level constitute individual difference factors (i.e., socio-demographics, personality and travel behaviour). The meso-level captures the exposure of individuals to AVs, instrumental domain-specific, symbolic-affective and moral-normative factors of AVA. The literature review revealed that 6% of the studies investigated the exposure of individuals to AVs (i.e., knowledge and experience). 
22% of the studies investigated domain-specific factors (i.e., performance and effort expectancy, safety, facilitating conditions, and service and vehicle characteristics), 4% symbolic-affective factors (i.e., hedonic motivation and social influence), and 12% moral-normative factors (i.e., perceived benefits and risks). Factors related to a person’s socio-demographic profile, travel behaviour and personality were investigated by 28%, 15% and 14% of the studies, respectively. We recommend that future studies empirically verify MAVA using longitudinal or experimental studies.","Automated driving system (ADS)-dedicated vehicles (DVs); automated public transport; automated vehicle acceptance; multi-level model; process-oriented view on acceptance","en","review","","","","","","","","","","","Transport and Planning","","",""
"uuid:21902836-4dea-4609-b65f-eddb23d2a7cc","http://resolver.tudelft.nl/uuid:21902836-4dea-4609-b65f-eddb23d2a7cc","Social life cycle assessment of brine treatment in the process industry: A consequential approach case study","Tsalidis, G.A. (TU Delft Energie and Industrie); Korevaar, G. (TU Delft Energie and Industrie)","","2019","Social life cycle assessment (SLCA) was developed to complement environmental life cycle assessment (LCA) and economic assessment. Contrary to LCA, SLCA is not yet standardized, and the consequential approach is little discussed in the literature. This study aims to perform a consequential SLCA and investigate the applicability of the method in industrial decision making. The aforementioned assessment is done within the Zero Brine project, which works on zero liquid discharge technology for water, salt, and magnesium recovery from brine effluents. The developed SLCA systems are gate-to-gate, and the analysis is performed at two levels: hotspot and site-specific. The system boundaries consist of a demineralized water (DW) production company, a chlor-alkali company, an electricity provider, a magnesium distributor in the Netherlands, and a Russian mining company. The latter exists only in the boundaries before the change due to the Zero Brine project, because recovered magnesium is expected to replace the Russian magnesium imported in the Netherlands. Within the system boundaries, the stakeholders contributing the most are the DW and the magnesium distributor companies. The former produces the brine and thus recovers the magnesium and salt. The latter is the exclusive distributor of Russian magnesium in the Netherlands. 
Overall, we find that the recovered magnesium results in improving social performance mainly in ""Freedom of association and collective bargaining"", ""Fair salary"", and ""Health and Safety"" due to decreasing the dependency of the Netherlands on Russia, while increasing operation in a country with much stronger environmental regulation and corporate commitment to sustainability issues. Modelling with SLCA may not result in the expected societal benefits, as the Russian community and workers may not benefit due to the large geographical boundaries of the system under study. Nevertheless, the application of the consequential approach can be considered suitable, yet complicated, for offering decision makers adequate social information. We recommend that decision makers in the DW company invest in magnesium recovery and that decision makers in the magnesium distributor company distribute the recovered magnesium.","Brine; Consequential approach; Magnesium; Process industry; Social life cycle assessment","en","journal article","","","","","","","","","","","Energie and Industrie","","",""
"uuid:202a5d07-bbf5-4db8-8e68-681bf766cb5a","http://resolver.tudelft.nl/uuid:202a5d07-bbf5-4db8-8e68-681bf766cb5a","A process based model of cohesive sediment resuspension under bioturbators’ influence","Cozzoli, Francesco (University of Salento; NIOZ Royal Netherlands Institute for Sea Research); Gjoni, Vojsava (University of Salento); Del Pasqua, Michela (University of Salento); Hu, Zhan (Sun Yat-sen University; Southern Laboratory of Ocean Science and Engineering (Guangdong, Zhuhai)); Ysebaert, Tom (Wageningen University & Research; NIOZ Royal Netherlands Institute for Sea Research); Herman, P.M.J. (TU Delft Environmental Fluid Mechanics; Deltares); Bouma, T.J. (NIOZ Royal Netherlands Institute for Sea Research; Universiteit Utrecht)","","2019","Macrozoobenthos may affect sediment stability and erodibility via their bioturbating activities, thereby impacting both the short- and long-term development of coastal morphology. Process-based models accounting for the effect of bioturbation are needed for the modelling of erosion dynamics. With this work, we explore whether the fundamental allometric principles of metabolic activity scaling with individual and population size may provide a framework to derive general patterns of bioturbation effect on cohesive sediment resuspension. Experimental flumes were used to test this scaling approach across different species of marine, soft-sediment bioturbators. The collected dataset encompasses a range of bioturbator functional diversity, individual densities, body sizes and overall population metabolic rates. Measurements were collected across a range of hydrodynamic stress from 0.02 to 0.25 Pa. Overall, we observed that bioturbators are able to slightly reduce the sediment resuspension at low hydrodynamic stress, whereas they noticeably enhance it at higher levels of stress. 
Along the whole hydrodynamic stress gradient, the quantitative effect of bioturbators on sediment resuspension can be efficiently described by the overall metabolic rate of the bioturbating benthic communities, with significant variations across the bioturbators’ taxonomic and functional diversity. One of the tested species (the gallery-builder Polychaeta Hediste diversicolor) had an effect that was partially deviating from the general trend, being able to markedly reduce sediment resuspension at low hydrodynamic stress compared to other species. By combining bioturbators’ influence with hydrodynamic force, we were able to produce a process-based model of biota-mediated sediment resuspension.","Annular flumes; Bioturbation; Metabolism; Process-based model; Sediment resuspension","en","journal article","","","","","","Accepted Author Manuscript","","2021-03-19","","","Environmental Fluid Mechanics","","",""
"uuid:7e2b8cd2-5bc3-4c44-8615-2c83437eb50c","http://resolver.tudelft.nl/uuid:7e2b8cd2-5bc3-4c44-8615-2c83437eb50c","On the Limits of Finite-Time Distributed Consensus Through Successive Local Linear Operations","Coutino, Mario (TU Delft Signal Processing Systems; RIKEN Center for Emergent Matter Science (CEMS)); Isufi, E. (TU Delft Multimedia Computing; TU Delft Signal Processing Systems); Maehara, Takanori (RIKEN Center for Emergent Matter Science (CEMS)); Leus, G.J.T. (TU Delft Signal Processing Systems)","Matthews, Michael B. (editor)","2019","In this work, we explore the limits of finite-time distributed consensus through the intersection of graph filters and matrix function theory. We focus on algorithms capable of computing the consensus exactly through filtering operations over a graph, and that have been proven to converge in finite time. In this context, we show that there exists an algebraic algorithm that can minimize the minimum polynomial of a matrix whose support is known. Different from previous works, we leverage the structure of matrices that share the same support and are diagonalizable by the eigenbasis of the graph shift operator to prove a theoretical result on the minimum number of diffusion steps required to reach consensus. We show that the previously known bound on the number of consensus iterations can be further reduced in accordance with the algebraic properties of the matrix representation of the network. 
Finally, insights with respect to the relation between the graph topology and the algebraic properties of such matrices are provided in order to encourage further discussion on the role of eigenvalues and eigenvectors in the network topology.","Consensus; distributed averaging; graph filters; signal processing over networks","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-08-21","","","Signal Processing Systems","","",""
"uuid:c8aa0421-e47f-48c3-8611-9bd8a55d6f2c","http://resolver.tudelft.nl/uuid:c8aa0421-e47f-48c3-8611-9bd8a55d6f2c","The Zig-Zag process and super-efficient sampling for Bayesian analysis of big data","Bierkens, G.N.J.C. (TU Delft Statistics); Fearnhead, Paul (Lancaster University); Roberts, Gareth (University of Warwick)","","2019","Standard MCMC methods can scale poorly to big data settings due to the need to evaluate the likelihood at each iteration. There have been a number of approximate MCMC algorithms that use sub-sampling ideas to reduce this computational burden, but with the drawback that these algorithms no longer target the true posterior distribution. We introduce a new family of Monte Carlo methods based upon a multidimensional version of the Zig-Zag process of [Ann. Appl. Probab. 27 (2017) 846–882], a continuous-time piecewise deterministic Markov process. While traditional MCMC methods are reversible by construction (a property which is known to inhibit rapid convergence) the Zig-Zag process offers a flexible nonreversible alternative which we observe to often have favourable convergence properties. We show how the Zig-Zag process can be simulated without discretisation error, and give conditions for the process to be ergodic. Most importantly, we introduce a sub-sampling version of the Zig-Zag process that is an example of an exact approximate scheme, that is, the resulting approximate process still has the posterior as its stationary distribution. 
Furthermore, if we use a control-variate idea to reduce the variance of our unbiased estimator, then the Zig-Zag process can be super-efficient: after an initial preprocessing step, essentially independent samples from the posterior distribution are obtained at a computational cost which does not depend on the size of the data.","MCMC; nonreversible Markov process; piecewise deterministic Markov process; stochastic gradient Langevin dynamics; sub-sampling; exact sampling","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:b20bcffa-2e3f-46d7-a678-83ceb6eff228","http://resolver.tudelft.nl/uuid:b20bcffa-2e3f-46d7-a678-83ceb6eff228","Terra–ink additive earth manufacturing for emergency architecture","Venturini, T. (TU Delft Applied Mechanics); Turrin, M. (TU Delft Design Informatics); Setaki, F. (TU Delft Environmental Technology and Design); Veer, F.A. (TU Delft Structural Design & Mechanics); Pronk, Arno (Eindhoven University of Technology); Teuffel, Patrick (Eindhoven University of Technology); Moonen, Yaron (Eindhoven University of Technology); Slangen, Stefan (Eindhoven University of Technology); Vorstermans, Rens (Eindhoven University of Technology)","","2019","In recent years, natural disasters and military conflicts have forced vast numbers of people to flee their home countries, contributing to the migration crisis we are facing today. According to the UNHCR, the number of forcibly displaced people worldwide has reached the highest level since World War II. Post-disaster housing is by nature diverse and dynamic, having to satisfy unique socio-cultural and economic requirements. Currently, however, housing emergencies are tackled inefficiently. Post-disaster housing strategies are characterized by a high economic impact and waste production, and a low adaptability to location-based needs. As a result, low-quality temporary shelters are provided, which often remain in use far beyond their intended service time. Focusing on temporary shelters suitable for the transitional period between emergency accommodation and permanent housing, TERRA-ink addresses new construction methods that allow for time and cost efficiency, but also for the flexibility to adapt to different contexts. TERRA-ink aims to develop a method for layering local soil by implementing 3D printing technologies. With the aid of such a construction system, the goal is to create durable structures that can be easily de-constructed once they have served their purpose. 
The use of locally sourced materials in combination with additive manufacturing is investigated aiming at reductions in financial investments, resources and human labor, as well as at simplified logistics, low environmental impact and adaptability to different situations and requirements. Such a building system has the potential of combining low-and high-tech technologies, in order to facilitate a fully open and universal solution for large scale 3D-printing using any type of soil.","Emergency; Extrusion; Material; Mixture; Process; Soil; Structure; Temporary","en","journal article","","","","","","Energy Innovation #5: 4TU.BOUW Lighthouse projects + PDEng ISBN 978-94-6366-246-8","","","","","Design Informatics","","",""
"uuid:db3a5347-1078-4914-b8cb-df1ee87b1696","http://resolver.tudelft.nl/uuid:db3a5347-1078-4914-b8cb-df1ee87b1696","Elimination of multiples from acoustic reflection data","Slob, E.C. (TU Delft Applied Geophysics and Petrophysics); Zhang, L. (TU Delft Applied Geophysics and Petrophysics)","","2019","Elimination of multiples from acoustic reflection data is important to reduce the effect of their presence in velocity model building and subsequent imaging. Many processing schemes assume that only primary reflection events are present in the data. Free-surface multiple elimination is an established technology, but internal multiple elimination is still under development. We show that new data-driven processing methods have led to a robust multiple elimination scheme. This scheme removes free-surface and internal multiples simultaneously, but can also eliminate internal multiples after free-surface multiple elimination. For each recording time instant, the method computes two filters using only the measured reflection response and an estimate of the source time signature. Once the filters are computed, they are used to filter the data up to that time instant. The result is that multiples related to reflectors with a two-way travel time less than the chosen time instant are removed from the data. This removes possible overlap with the primary reflection from the first deeper reflector. This event can then be extracted and stored in a new dataset. Repeating the procedure for all recording times produces the desired primaries-only dataset. 
A numerical and a field data example show the effectiveness of the method.","multiple elimination; processing; acoustic","en","conference paper","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-04-09","","","Applied Geophysics and Petrophysics","","",""
"uuid:c0a1eea6-e9bd-4cf7-be70-3241f6d81893","http://resolver.tudelft.nl/uuid:c0a1eea6-e9bd-4cf7-be70-3241f6d81893","Controllability of bandlimited graph processes over random time varying graphs","Gama, F. (University of Pennsylvania); Isufi, E. (TU Delft Multimedia Computing); Ribeiro, Alejandro (University of Pennsylvania); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2019","Controllability of complex networks arises in many technological problems involving social, financial, road, communication, and smart grid networks. In many practical situations, the underlying topology might change randomly with time, due to link failures such as changing friendships, road blocks or sensor malfunctions. If this randomness is not properly accounted for, it leads to poorly controlled dynamics. We consider the problem of controlling the network state when the topology varies randomly with time. Our problem concerns target states that are bandlimited over the graph; these are states that have nonzero frequency content only on a specific graph frequency band. We thus leverage graph signal processing and exploit the bandlimited model to drive the network state from a fixed set of control nodes. When controlling the state from a few nodes, we observe that spurious, out-of-band frequency content is created. Therefore, we focus on controlling the network state over the desired frequency band, and then use a graph filter to remove the unwanted frequency content. To account for the topological randomness, we develop the concept of controllability in the mean, which consists of driving the expected network state towards the target state. A detailed mean squared error analysis is performed to quantify the statistical deviation between the final controlled state on a particular graph realization and the actual target state. 
Finally, we propose different control strategies and evaluate their effectiveness on synthetic network models and social networks.","graph process; Graph signal processing; graph signals; linear systems on graphs; network controllability; random graphs","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-06-15","","","Multimedia Computing","","",""
"uuid:3689aa94-0704-408d-a7ee-0a85792ba8f5","http://resolver.tudelft.nl/uuid:3689aa94-0704-408d-a7ee-0a85792ba8f5","Exact Network Reconstruction from Complete SIS Nodal State Infection Information Seems Infeasible","Prasse, B. (TU Delft Network Architectures and Services); Van Mieghem, P.F.A. (TU Delft Network Architectures and Services)","","2019","The SIS dynamics of the spread of a virus crucially depend on both the network topology and the spreading parameters. Since neither the topology nor the spreading parameters are known for the majority of applications, they have to be inferred from observations of the viral spread. We propose an inference method for both topology and spreading parameters based on a maximum-a-posteriori estimation approach for the sampled-time Markov chain of an SIS process. The resulting estimation problem, given by a mixed-integer optimisation problem, results in exponential computational time if a brute-force approach is employed. By introducing an efficient and accurate, polynomial-time heuristic, the topology of the network can almost always be exactly reconstructed. Notwithstanding, reconstructing the network with a reasonably high accuracy requires a subexponentially increasing number of observations and an exponentially increasing computation time with respect to the number of nodes N. Such long observation periods are hardly realistic, which justifies the claim in the title.","Bayesian estimation; network reconstruction; SIS process; spreading parameter estimation","en","journal article","","","","","","","","","","","Network Architectures and Services","","",""
"uuid:ba02a76b-ee42-48d9-8500-323d9d858e8a","http://resolver.tudelft.nl/uuid:ba02a76b-ee42-48d9-8500-323d9d858e8a","Morphodynamic Resilience of Intertidal Mudflats on a Seasonal Time Scale","van der Wegen, M. (IHE Delft Institute for Water Education; Deltares); Roelvink, D. (TU Delft Coastal Engineering; IHE Delft Institute for Water Education; Deltares); Jaffe, B. E. (Pacific Coastal and Marine Science Center)","","2019","Intertidal mudflats are morphodynamic features present in many estuaries worldwide. Often located between vegetated shores and deep channels, they comprise valuable ecosystems and serve to protect the hinterland by attenuating waves. Although mudflats are persistently present on yearly to decadal time scales, little is known about their morphodynamic adaptation to short-term variations in forcing such as storms, spring-neap tidal cycles, and sediment supply. This study aims to explore the morphodynamic resilience of mudflats to seasonal variations in forcing. First, we compare transects observed in South Bay, California, at 3- to 6-monthly intervals. Second, we present the results of a process-based, morphodynamic profile model (Mflat). Mflat is an open-source Matlab code that describes both cross-shore and alongshore tidal hydrodynamics as well as a stationary wave model. An advection-diffusion equation governs sediment transport, while bed level changes result from the divergence of the sediment transport field. Mflat reproduces the observed South San Francisco Bay profile in equilibrium with significant skill. Short-term variations in hydrodynamic forcing and sediment characteristics disturb the profile mainly at the channel-shoal edge. The modeled profile disturbance is consistent with observations. The modeled profile is remarkably resilient since it recovers to the equilibrium profile within weeks to months. 
The model results suggest that 3-monthly observation intervals are probably too long to discriminate processes responsible for the profile disturbance. These processes may include variations in sediment supply, mudflat erodibility, and wave action as well as the spring-neap tidal cycle.","channel-shoal interactions; coastal resilience; estuarine morphodynamics; intertidal area; mudflats; process-based modeling","en","journal article","","","","","","","","2020-05-26","","","Coastal Engineering","","",""
"uuid:e4a9c9bf-a0a4-472c-856c-1e9cd2cb48e4","http://resolver.tudelft.nl/uuid:e4a9c9bf-a0a4-472c-856c-1e9cd2cb48e4","The Viral State Dynamics of the Discrete-Time NIMFA Epidemic Model","Prasse, B. (TU Delft Network Architectures and Services); Van Mieghem, P.F.A. (TU Delft Network Architectures and Services)","","2019","The majority of research on epidemics relies on models which are formulated in continuous time. However, processing real-world epidemic data and simulating epidemics is done digitally, and the continuous-time epidemic models are usually approximated by discrete-time models. In general, there is no guarantee that properties of continuous-time epidemic models, such as the stability of equilibria, also hold for the respective discrete-time approximation. We analyse the discrete-time NIMFA epidemic model on directed networks with heterogeneous spreading parameters. In particular, we show that the viral state is increasing and does not overshoot the steady state, that the steady state is exponentially stable, and we provide linear systems that bound the viral state evolution. Thus, the discrete-time NIMFA model succeeds in capturing the qualitative behaviour of a viral spread and provides a powerful means to study real-world epidemics.","Epidemic processes; nonlinear systems","en","journal article","","","","","","","","","","","Network Architectures and Services","","",""
"uuid:f3d27996-7301-4063-ba82-331d609f0908","http://resolver.tudelft.nl/uuid:f3d27996-7301-4063-ba82-331d609f0908","Prediction of density and volume variation of hematite ore particles during in-flight melting and reduction","Chen, Z. (TU Delft (OLD) MSE-3); Qu, Ying xia (Northeastern University); Zeilstra, Christiaan (Tata Steel); Van Der Stel, Jan (Tata Steel); Sietsma, J. (TU Delft Materials Science and Engineering; TU Delft (OLD) MSE-3); Yang, Y. (TU Delft (OLD) MSE-3)","","2019","HIsarna is a promising ironmaking technology to reduce CO2 emissions. Information on phase transformation is essential for reaction analysis of the cyclone reactor of the HIsarna process. In addition, data on the density and volume of the ore particles are necessary for estimation of the residence time of the particles in the cyclone reactor. Phase transformation of iron ore particles was experimentally studied in a drop-tube furnace under simulated cyclone conditions and compared with thermodynamic calculation. During the pre-reduction process inside the reactor, the mineralogy of iron ore particles transforms sequentially from hematite to sub-oxides. The density changes of the particles during melting and reduction can be predicted based on the phase composition and temperature. To this end, density models from the literature were evaluated against reported experimental data of slag. As a result, a more reliable density model was developed to calculate the density of the formed slag containing mainly FeO–Fe2O3. The density and volume of the partially reduced ore particles or melt droplets were estimated based on this model. The results show that the density of the ore particles decreases by at most 15.1% over the course of the reduction process. Furthermore, the model results also indicate that heating, melting and reduction of the ore could lead to 6.63–9.37% swelling of the particles, which is mostly attributable to thermal expansion. 
This results in corresponding variations in the velocity of the ore particles or melt droplets during their flight inside the reactor.","Hematite ore particle; HIsarna; Ironmaking process; Ore density change; Phase transformation; Smelting reduction","en","journal article","","","","","","","","","","Materials Science and Engineering","(OLD) MSE-3","","",""
"uuid:04cf8fe3-8d12-4925-94ea-2eb36f4776d0","http://resolver.tudelft.nl/uuid:04cf8fe3-8d12-4925-94ea-2eb36f4776d0","Bas-Relief Modeling from Normal Layers","Wei, Mingqiang (Nanjing University of Aeronautics and Astronautics); Tian, Yang (The Chinese University of Hong Kong; Chinese Academy of Sciences); Pang, Wai-Man (Caritas Institute of Higher Education); Wang, C.C. (TU Delft Materials and Manufacturing); Pang, Ming-Yong (Nanjing Normal University); Wang, Jun (Nanjing University of Aeronautics and Astronautics); Qin, Jin (The Hong Kong Polytechnic University); Heng, Pheng-Ann (The Chinese University of Hong Kong; Chinese Academy of Sciences)","","2019","Bas-relief is characterized by its unique presentation of intrinsic shape properties and/or detailed appearance using materials raised up in different degrees above a background. However, many bas-relief modeling methods could not manipulate scene details well. We propose a simple and effective solution for two kinds of bas-relief modeling (i.e., structure-preserving and detail-preserving), which is different from the prior tone mapping alike methods. Our idea originates from an observation on typical 3D models which are decomposed into a piecewise smooth base layer and a detail layer in normal field. Proper manipulation of the two layers contributes to both structure-preserving and detail-preserving bas-relief modeling. We solve the modeling problem in a discrete geometry processing setup that uses normal-based mesh processing as a theoretical foundation. Specifically, using the two-step mesh smoothing mechanism as a bridge, we transfer the bas-relief modeling problem into a discrete space, and solve it in a least-squares manner. 
Experiments and comparisons to other methods show that (i) geometry details are better preserved in the scenario with high compression ratios, and (ii) structures are clearly preserved without shape distortion and interference from details.","Bas-relief modeling; detail-preserving; discrete geometry processing; normal decomposition; structure-preserving","en","journal article","","","","","","Accepted author manuscript","","","","","Materials and Manufacturing","","",""
"uuid:73a5c71f-422b-4810-9ab7-be2f1baff035","http://resolver.tudelft.nl/uuid:73a5c71f-422b-4810-9ab7-be2f1baff035","Time-division Multiplexing Automata Processor","Yu, J. (TU Delft Computer Engineering); Du Nguyen, H.A. (TU Delft Computer Engineering); Abu Lebdeh, M.F.M. (TU Delft Computer Engineering); Taouil, M. (TU Delft Computer Engineering); Hamdioui, S. (TU Delft Computer Engineering)","","2019","The Automata Processor (AP) is a special implementation of non-deterministic finite automata that performs pattern matching by exploring parallel state transitions. The implementation typically contains a hierarchical switching network, causing long latency. This paper proposes a methodology to split such a hierarchical switching network into multiple pipelined stages, making it possible to process several input sequences in parallel by using time-division multiplexing. We use a new resistive-RAM-based AP (instead of the known DRAM- or SRAM-based designs) to illustrate the potential of our method. The experimental results show that our approach increases the throughput by almost a factor of 2 at the cost of a marginal area overhead.","automata; parallel processing; time-division multiplexing","en","conference paper","IEEE","","","","","","","","","","Computer Engineering","","",""
"uuid:325db611-5522-4ab1-a2dc-066dde42a509","http://resolver.tudelft.nl/uuid:325db611-5522-4ab1-a2dc-066dde42a509","Controllable Motion-Blur Effects in Still Images","Luo, X. (TU Delft Computer Graphics and Visualisation); Ziliotto Salamon, N. (TU Delft Computer Graphics and Visualisation); Eisemann, E. (TU Delft Computer Graphics and Visualisation)","","2019","Motion blur in a photo is the consequence of object motion during the image acquisition. It results in a visible trail along the motion of a recorded object and can be used by photographers to convey a sense of motion. Nevertheless, it is very challenging to acquire this effect as intended and requires much experience from the photographer. To achieve actual control over the motion blur, one could be added in a post process but current solutions require complex manual intervention and can lead to artifacts that mix moving and static objects incorrectly. In this paper, we propose a novel method to add motion blur to a single image that generates the illusion of a photographed motion. Relying on a minimal user input, a filtering process is employed to produce a virtual motion effect. It carefully handles object boundaries to avoid artifacts produced by standard filtering methods. We illustrate the effectiveness of our solution with various complex examples, including multi-directional blur, reflections, multiple objects, and illustrate how several motion-related artistic effects can be achieved. Our post-processing solution is an alternative to capturing the intended real-world motion blur directly and enables fine-grained control of the motion-blur effect.","Motion blur; image processing; long exposure; post-production","en","journal article","","","","","","This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.","","","","","Computer Graphics and Visualisation","","",""
"uuid:0f5c25f0-86ed-40cb-8012-33ae6c835e15","http://resolver.tudelft.nl/uuid:0f5c25f0-86ed-40cb-8012-33ae6c835e15","Frame-based Programming, Stream-Based Processing for Medical Image Processing Applications","Hoozemans, J.J. (TU Delft Computer Engineering); de Jong, Rob (Philips Healthcare Nederland); van der Vlugt, Steven (Philips Healthcare Nederland); van Straten, J. (TU Delft Computer Engineering); Elango, Uttam Kumar (Student TU Delft); Al-Ars, Z. (TU Delft Computer Engineering)","","2019","This paper presents and evaluates an approach to deploy image and video processing pipelines that are developed frame-oriented on a hardware platform that is stream-oriented, such as an FPGA. First, this calls for a specialized streaming memory hierarchy and accompanying software framework that transparently moves image segments between stages in the image processing pipeline. Second, we use softcore VLIW processors, that are targetable by a C compiler and have hardware debugging capabilities, to evaluate and debug the software before moving to a High-Level Synthesis flow. The algorithm development phase, including debugging and optimizing on the target platform, is often a very time consuming step in the development of a new product. Our proposed platform allows both software developers and hardware designers to test iterations in a matter of seconds (compilation time) instead of hours (synthesis or circuit simulation time).","FPGA; Image processing; Medical imaging","en","journal article","","","","","","","","","","","Computer Engineering","","",""
"uuid:fb077db7-4298-47a0-b146-9fe111715881","http://resolver.tudelft.nl/uuid:fb077db7-4298-47a0-b146-9fe111715881","Domino effects in chemical factories and clusters: An historical perspective and discussion","Swuste, P.H.J.J. (TU Delft Safety and Security Science); van Nunen, K. (Universiteit Antwerpen); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science; Universiteit Antwerpen; Rijksinstituut voor Volksgezondheid en Milieu (RIVM)); Khakzad, N. (TU Delft Safety and Security Science)","","2019","Major accidents in Western countries, receiving a lot of media attention in the 1970s, are the starting point for research into internal and external domino effects in the chemical and petrochemical sectors and clusters. Initially, these reports are published by government institutions and government-related research centres. With the rise of quantitative risk analyses in the 1970s and 1980s, the so-called ‘coloured books’, published in the Netherlands, play a prominent role in quantifying these domino effects. Since the mid-1990s, the second European Seveso Directive encourages scientific research on domino effects, as shown by the substantial growth of academic publications on the topic. Research in Western countries is dominated by risk assessments, in which probabilities and failure mechanisms are calculated for the complex phenomenon of domino effects and its consequences. Previous works are closely related to political, official and private decision-making. A transition towards risk management is still in its infancy. A future transition is necessary to understand initial scenarios as starting points for domino effects.
In India, a wake-up call for domino effects occurs in the mid-1990s. Chinese publications on domino effects in the international scientific press appear from the mid-2000s onwards. Due to rapid industrialisation, the numbers in China are overwhelming, both in terms of the number of chemical companies and of major accidents in this sector.
This article discusses the results of research on domino effects conducted in the period 1966–2018, as well as major determinants of these accident processes. Present and future transitions in this research domain are also discussed.","Domino-effects; Process industry; Chemical cluster; History; Review","en","review","","","","","","","","2021-02-11","","","Safety and Security Science","","",""
"uuid:2ba23185-5a96-408a-914f-a71f7751053f","http://resolver.tudelft.nl/uuid:2ba23185-5a96-408a-914f-a71f7751053f","Identifying strategic maintenance capacity for accidental damage occurrence in aircraft operations","Narayanan, Prasobh (Student TU Delft); Verhagen, W.J.C. (TU Delft Air Transport & Operations); Dhanisetty, V.S.V. (TU Delft Air Transport & Operations)","","2019","Airline operators face accidental damages on their fleet of aircraft as part of operational practice. Individual occurrences are hard to predict; consequently, the approach towards repairing accidental damage is reactive in aircraft maintenance practice. However, by aggregating occurrence data and predicting future occurrence rates, it is possible to predict future long-term (strategic) demand for maintenance capacity. In this paper, a novel approach for integration of reliability modelling and inventory control is presented. Here, the concept of a base stock policy has been translated to the maintenance slot capacity problem to determine long-term cost-optimal capacity. Demand has been modelled using a superposed Non-homogeneous Poisson Process (NHPP). A case study has been performed on damage data from a fleet of Boeing 777 aircraft. The results prove the feasibility of adopting an integrated approach towards strategic capacity identification, using real-life data to predict future damage occurrence and associated maintenance slot requirements.","aircraft maintenance; inventory control; stochastic process; strategic capacity identification","en","journal article","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:86c4cfe5-4b66-41c1-8095-86578d3435d4","http://resolver.tudelft.nl/uuid:86c4cfe5-4b66-41c1-8095-86578d3435d4","Distributed stochastic reserve scheduling in AC power systems with uncertain generation","Rostampour, Vahab (TU Delft Team Tamas Keviczky); Ter Haar, Ole (Student TU Delft); Keviczky, T. (TU Delft Team Tamas Keviczky)","","2019","This paper presents a framework to carry out multi-area stochastic reserve scheduling (RS) based on an AC optimal power flow (OPF) model with high penetration of wind power using distributed consensus and the alternating direction method of multipliers (ADMM). We first formulate the OPF-RS problem using semidefinite programming (SDP) in infinite dimensional spaces that are in general computationally intractable. Using a novel affine policy, we develop an approximation of the infinite dimensional SDP as a tractable finite dimensional SDP, and explicitly quantify the performance of the approximation. To this end, we adopt the recent developments in randomized optimization that allow a priori probabilistic feasibility guarantees to optimally schedule generating units while simultaneously determining the required reserve. We then use the geographical pattern of the power system to decompose the large-scale system into a multi-area power network, and provide a consensus ADMM algorithm to find a feasible solution for both local and overall multi-area network. Using our distributed stochastic framework, each area can use its own wind information to achieve local feasibility certificates, while ensuring overall feasibility of the multi-area power network under mild conditions. 
We provide numerical comparisons with a new benchmark formulation, the so-called converted DC (CDC) power flow model, using Monte Carlo simulations for two different IEEE case studies.","Generators; Optimization; Power systems; Probabilistic logic; Stochastic processes; Uncertainty; Wind power generation","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-05-01","","","Team Tamas Keviczky","","",""
"uuid:a6289987-d6d2-4377-95fa-bb1d8d8b5012","http://resolver.tudelft.nl/uuid:a6289987-d6d2-4377-95fa-bb1d8d8b5012","Surgical process modelling strategies: which method to choose for determining workflow?","Gholinejad, M. (TU Delft Medical Instruments & Bio-Inspired Technology); Loeve, A.J. (TU Delft Medical Instruments & Bio-Inspired Technology); Dankelman, J. (TU Delft Medical Instruments & Bio-Inspired Technology)","","2019","The vital role of surgeries in healthcare requires constant attention to improvement. Surgical process modelling is an innovative and rather recently introduced approach for tackling the issues in today’s complex surgeries. This modelling field is very challenging and still under development; it is therefore not always clear which modelling strategy best fits the needs of a given situation. The aim of this study was to provide a guide for matching the choice of modelling strategy to the task of determining surgical workflows. In this work, the concepts associated with surgical process modelling are described, aiming to clarify them and to promote their use in future studies. The relationships between these concepts and the possible combinations of suitable modelling approaches are elaborated, and the criteria for choosing a proper modelling strategy are discussed.","surgical procedure; surgical process model; surgical workflow analysis","en","review","","","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:62518d75-2ede-4d68-b947-d508a8f4f521","http://resolver.tudelft.nl/uuid:62518d75-2ede-4d68-b947-d508a8f4f521","The Effect of Material Fresh Properties and Process Parameters on Buildability and Interlayer Adhesion of 3D Printed Concrete","Panda, Biranchi (Nanyang Technological University); Noor Mohamed, Nisar Ahamed (Nanyang Technological University); Chandra Paul, Suvash (Monash University Malaysia; Nanyang Technological University); Bhagath Singh, GVP (Swiss Federal Institute of Technology); Tan, Ming Jen (Nanyang Technological University); Šavija, B. (TU Delft Materials and Environment)","","2019","The advent of digital concrete fabrication calls for advancing our understanding of the interaction of 3D printing with material rheology and print parameters, in addition to developing new measurement and control techniques. Thixotropy is the main challenge associated with printable material, which offers high yield strength and low viscosity. The higher the thixotropy, the better the shape stability and the higher buildability. However, exceeding a minimum value of thixotropy can cause high extrusion pressure and poor interface bond strength if the printing parameters are not optimized to the part design. This paper aims to investigate the effects of both material and process parameters on the buildability and inter-layer adhesion properties of 3D printed cementitious materials, produced with different thixotropy and print head standoff distances. Nano particles are used to increase the thixotropy and, in this context, a lower standoff distance is found to be useful for improving the bond strength. The low viscosity “control” sample is unaffected by the variation in standoff distances, which is attributed to its flowability and low yield stress characteristics that lead to strong interfacial bonding. 
This is supported by our microscopic observations.","3D concrete printing; Bond strength; Process parameters; Thixotropy","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:7344f799-fb3c-4cc2-a986-816f243d6e19","http://resolver.tudelft.nl/uuid:7344f799-fb3c-4cc2-a986-816f243d6e19","You make it and you try it out: Seeds of design discipline futures","Lloyd, P.A. (TU Delft Methodologie en Organisatie van Design)","","2019","This paper takes a narrative seam through the design discipline, attempting to explain how design methodology, one of the three types of Nigel Cross' designerly ways of knowing, has changed over the 40 years of Design Studies. Specifically, the paper identifies the point when a ‘social turn’ in the discipline occurred, allowing more nuanced and critical studies of designing, and shifting the balance from an objective (‘scientific’) perspective to one more based on relativist approaches. The paper concludes by noting the plurality of present-day study, arguably enabled by design thinking, and sketches what this holds for the future of the discipline. The references in the paper are mainly restricted to those published in, or strongly relating to, Design Studies.","design methods; design process; design research; design studies; design thinking","en","journal article","","","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:0d29dedb-0554-4e84-8167-e63a472ca874","http://resolver.tudelft.nl/uuid:0d29dedb-0554-4e84-8167-e63a472ca874","Fabrication and characterization of polyimide-based 'smooth' titanium nitride microelectrode arrays for neural stimulation and recording","Oliveira Rodrigues, F.J. (TU Delft Electronic Components, Technology and Materials; University of Minho); Ribeiro, J.F. (University of Minho); Anacleto, P.A. (University of Minho); Fouchard, A. (Grenoble Institute of Neurosciences); David, O. (Grenoble Institute of Neurosciences); Sarro, Pasqualina M (TU Delft Electronic Components, Technology and Materials); Mendez, P.M. (University of Minho)","","2019","OBJECTIVE: As electrodes are required to interact with sub-millimeter neural structures, innovative microfabrication processes are required to enable fabrication of microdevices involved in such stimulation and/or recording. This requires the development of highly integrated and miniaturized systems, comprising die-integration-compatible technology and flexible microelectrodes. To elicit selective stimulation and recordings of sub-neural structures, such microfabrication process flow can beneficiate from the integration of titanium nitride (TiN) microelectrodes onto a polyimide substrate. Finally, assembling onto cuffs is required, as well as electrode characterization. APPROACH: Flexible TiN microelectrode array integration and miniaturization was achieved through microfabrication technology based on microelectromechanical systems (MEMS) and complementary metal-oxide semiconductor processing techniques and materials. They are highly reproducible processes, granting extreme control over the feature size and shape, as well as enabling the integration of on-chip electronics. This design is intended to enhance the integration of future electronic modules, with high gains on device miniaturization. 
MAIN RESULTS: (a) Fabrication of two electrode designs, (1) a 2 mm long array with 14 TiN square-shaped microelectrodes (80 × 80 µm²), and (2) an electrode array with 2 mm × 80 µm contacts. The average impedances at 1 kHz were 59 and 5.5 kΩ, respectively, for the smaller and larger contacts. Both designs were patterned on a flexible substrate and directly interconnected with a silicon chip. (b) Integration of the flexible microelectrode array onto a cuff electrode designed for acute stimulation of sub-millimeter nerves. (c) The TiN electrodes exhibited capacitive charge transfer, a water window of −0.6 V to 0.8 V, and a maximum charge injection capacity of 154 ± 16 µC cm⁻². SIGNIFICANCE: We present the concept, fabrication and characterization of composite and flexible cuff electrodes, compatible with post-processing and MEMS packaging technologies, which allow for compact integration with control, readout and RF electronics. The fabricated TiN microelectrodes were electrochemically characterized and exhibited a comparable performance to other state-of-the-art electrodes for neural stimulation and recording. Therefore, the presented TiN-on-polyimide microelectrodes, released from silicon wafers, are a promising solution for neural interfaces targeted at sub-millimeter nerves, which may benefit from future upgrades with die-electronic modules.","peripheral nerve interfaces; nerve cuff electrodes; die-compatible process; titanium nitride electrode; polyimide composite electrodes; sub-millimeter nerves","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-08-02","","","Electronic Components, Technology and Materials","","",""
"uuid:072d1809-b9e8-4677-975c-b37ea8434dd7","http://resolver.tudelft.nl/uuid:072d1809-b9e8-4677-975c-b37ea8434dd7","Context dependence of project management competences","Nijhuis, S. (TU Delft Landscape Architecture)","","2019","Higher Education is incorporating project management in their curricula, but literature on project management education, higher education practice and practitioner needs show little agreement (Nijhuis, 2017a). Guidance for curriculum design in Project Management is available (Task force on PM curricula, 2015), but also stresses that the development of the curricula should take the (local) context into consideration.Several studies points at a context dependence of project manager competences like leadership styles needed to achieve success (Turner & Müller, 2006) or competences required for success based on regional requirements (Turner, Müller, & Dulewicz, 2009).Several authors have tried to categorize projects (Busser, 2010; Crawford, Hobbs, & Turner, 2006; Dias, Tereso, Braga, & Fernandes, 2014; Dvir, Sadeh, & Malach-Pines, 2006; Müller & Turner, 2007), but on a whole not agreeing on what actually characterizes a project.This paper explores the concept of context from several points of view: does the project type (ICT vs engineering vs organizational) play a role in needed competences. Is there a difference between junior and senior project managers and what consequence could this have for higher education? Several sources of data are incorporated in this paper: 10 focus groups with experienced project managers revealing information on process and personal competences. A small survey on process and personal competences for junior project managers. 
A workshop with mixed experience project managers discussing the same items and several dedicated workshops aimed at finding ideas and perceptions of differences between junior and senior and contexts.","Project Management; Competences; Context; Attribute Competence; Process Competence","en","conference paper","Delft University of Technology","","","","","","","","","","Landscape Architecture","","",""
"uuid:06657aa6-b9ff-46ca-885c-91b8f8ca4ab5","http://resolver.tudelft.nl/uuid:06657aa6-b9ff-46ca-885c-91b8f8ca4ab5","The tacit design process in architectural design education","van Dooren, E.J.G.C. (TU Delft Architectural Engineering); van Dorst, M.J. (TU Delft Environmental Technology and Design); Asselbergs, M.F. (TU Delft Architectural Engineering); Van Merriënboer, J.J.G. (Universiteit Maastricht); Els, Boshuizen (Open University of the Netherlands; University of Turku)","","2019","The purpose of the architectural design studio is that students learn to think and act like designers. However, communication between teachers and students seems to be problem- atic. Teachers barely seem to explain how designers work, which may be confusing for stu- dents. To learn professional reasoning processes and strategies, different teaching activi- ties are involved, such as modelling, coaching, scaffolding, reflection, exploration and artic- ulation. In the design studio it seems tradition that teachers only ask questions, while not articulating the design process.
This paper focuses on the research question of whether teachers in architectural design education articulate the main ‘designerly’ actions and skills performed by expert designers, and if so, to what extent and in which manner. To answer these questions, video-recordings of 13 tutorial sessions are analysed with the help of an educational framework of five generic elements. The framework consists of the basic design process actions and skills, and was specifically developed as a vocabulary for making the design process explicit and for training students in the design process elements. The main conclusion is that teachers refer to the design product in an implicit way. They leave it to the students to discover the structure and components of the design process more or less by themselves.","Design process; generic elements; design education; design skills","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-09-15","","","Architectural Engineering","","",""
"uuid:5c55596f-2b0e-4b08-9d5f-28076760c639","http://resolver.tudelft.nl/uuid:5c55596f-2b0e-4b08-9d5f-28076760c639","Exploring the Fourth Order: Designing Organisational Infrastructure. In Research Perspectives in the Era of Transformations","Klitsie, J.B. (TU Delft Marketing and Consumer Research); Price, R.A. (TU Delft Marketing and Consumer Research); de Lille, C.S.H. (TU Delft Methodologie en Organisatie van Design; The Hague University of Applied Sciences)","Bohemia, E. (editor); Gemser, G. (editor); de Bont, C. (editor); Fain, N. (editor); Assoreira Almendra, R. (editor)","2019","Companies are organised to fulfil two distinctive functions: efficient and resilient exploitation of current business and parallel exploration of new possibilities. For the latter,
companies require strong organisational infrastructure such as team compositions and functional structures to ensure exploration remains effective. This paper explores the
potential for designing organisational infrastructure to be part of fourth order subject matter. In particular, it explores how organisational infrastructure could be designed in the
context of an exploratory unit, operating in a large heritage airline. This paper leverages insights from a long-term action research project and finds that building trust and shared
frames are crucial to designing infrastructure that affords the greater explorative agenda of an organisation.","corporate infrastructure; fourth order design; action research; design process; innovation","en","conference paper","","","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:8eb300dd-430a-4970-aa05-c3b6338431ee","http://resolver.tudelft.nl/uuid:8eb300dd-430a-4970-aa05-c3b6338431ee","Spatial variation pattern analysis of hydrologic processes and water quality in three gorges reservoir area","Chen, Xiaomin (Wuhan University); Xu, Gaohong (Changjiang Water Resources Commission); Zhang, Wanshun (Wuhan University); Peng, Hong (Wuhan University); Xia, Han (Wuhan University); Zhang, Xiao (Wuhan University); Ke, Q. (TU Delft Hydraulic Structures and Flood Risk); Wan, Jing (Wuhan University)","","2019","The Three Gorges Project (TGP) has greatly enhanced the heterogeneity of the underlying surface in the Three Gorges Reservoir Area (TGRA), thereby affecting the hydrologic processes and water quality. However, the influence of the differences of underlying surfaces on the hydrologic processes and water quality in the TGRA has not been studied thoroughly. In this research, the influence of the heterogeneity of landscape pattern and geographical characteristics on the spatial distribution difference of hydrologic processes and water quality in the different tributary basins of the TGRA was identified. The TGRA was divided into 23 tributary basins with 1840 sub-basins. The spatial differentiation of the hydrologic processes and water quality of the 23 tributary basins was examined by the Soil and Water Assessment Tool (SWAT). The observed data between 1 January 2010 and 31 December 2013 were used to calibrate and validate the model, after which the SWAT model was applied to further predict the runoffand water quality in the TGRA. There are 25 main model parameters, including CN2, CH_K2 and SOL_AWC, which were calibrated and validated with SWAT-Calibration and Uncertainty Procedures (SWAT-CUP). The landscape patterns and geomorphologic characteristics in 23 tributary basins were investigated and spatially visualized to correlate with surface runoffand nutrient losses. 
Due to geographical differences, the average total runoff depth (2010-2013) in the left bank area (538.6 mm) was 1.4 times higher than that in the right bank area (384.5 mm), total nitrogen (TN) loads in the left bank area (6.23 kg/ha) were 1.9 times higher than in the right bank area (3.27 kg/ha), and total phosphorus (TP) loads in the left bank area (1.27 kg/ha) were 2.2 times higher than in the right bank area (0.58 kg/ha). The total runoff depth decreased from the head region (553.3 mm) to the tail region (383.2 mm), while the loads of TN and TP were the highest in the middle region (5.51 kg/ha for TN, 1.15 kg/ha for TP), followed by the tail region (5.15 kg/ha for TN, 1.12 kg/ha for TP) and head region (3.92 kg/ha for TN, 0.56 kg/ha for TP). Owing to the different spatial distributions of land use, soil and geographical features in the TGRA, correlations between elevation, slope gradient, slope length and total runoff depth, TN and TP, were not clear and no consistency was observed in each tributary basin. Therefore, the management and control schemes of the water security of the TGRA should be adapted to local conditions.","Geomorphologic characteristics; Hydrologic processes and water quality; Landscape pattern; Spatial variation; SWAT model; Three Gorges Reservoir Area","en","journal article","","","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:5345e6db-8da0-4343-bf6a-75a3705a3418","http://resolver.tudelft.nl/uuid:5345e6db-8da0-4343-bf6a-75a3705a3418","Hydrodynamic Limit of the Symmetric Exclusion Process on a Compact Riemannian Manifold","van Ginkel, G.J. (TU Delft Applied Probability); Redig, F.H.J. (TU Delft Applied Probability)","","2019","We consider the symmetric exclusion process on suitable random grids that approximate a compact Riemannian manifold. We prove that a class of random walks on these random grids converge to Brownian motion on the manifold. We then consider the empirical density field of the symmetric exclusion process and prove that it converges to the solution of the heat equation on the manifold.","Compact Riemannian manifold; Hydrodynamic limit; Random grids; Symmetric exclusion process","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:a5127699-2914-40b4-9a21-04a7dab274b8","http://resolver.tudelft.nl/uuid:a5127699-2914-40b4-9a21-04a7dab274b8","Towards an approach integrating various levels of data analytics to exploit product-usage information in product development","Klein, Patrick (University of Bremen); van der Vegte, Wilhelm Frederik (TU Delft Internet of Things); Hribernik, Karl (BIBA - Bremer Institut für Produktion und Logistik GmbH); Klaus-Dieter, Thoben (University of Bremen)","","2019","By applying data analytics to product usage information (PUI) from combinations of different channels, companies can get a more complete picture of their products' and services' Mid-Of-Life. All data, which is gathered within the usage phase of a product and which relates to a more comprehensive understanding of the usability of the product itself, can become valuable input. Nevertheless, an efficient use of such knowledge requires to setup related analysis capabilities enabling users not only to visualize relevant data, but providing development related knowledge e.g. to predict product behaviours not yet reflected by initial requirements. The paper elaborates on explorations to support product development with analytics to improve anticipation of future usage of products and related services. The discussed descriptive, predictive and prescriptive analytics in given research context share the idea and overarching process of getting knowledge out of PUI data. By implementation of corresponding features into an open software platform, the application of advanced analytics for white goods product development has been explored as a reference scenario for PUI exploitation.","Analytics; Design methods; Semantic data processing; Simulation; User centred design","en","journal article","","","","","","","","","","","Internet of Things","","",""
"uuid:6d0b7064-3df1-4481-97fe-340f8960f449","http://resolver.tudelft.nl/uuid:6d0b7064-3df1-4481-97fe-340f8960f449","A value-based definition of success in adaptive port planning: a case study of the Port of Isafjordur in Iceland","Eskafi, Majid (University of Iceland); Fazeli, Reza (University of Iceland); Dastgheib, Ali (IHE Delft Institute for Water Education); Taneja, P. (TU Delft Rivers, Ports, Waterways and Dredging Engineering); Ulfarsson, Gudmundur F. (University of Iceland); Thorarinsdottir, Ragnheidur I. (Agricultural University of Iceland); Stefansson, Gunnar (University of Iceland)","","2019","Multiple stakeholders with a wide range of objectives are engaged in a port system. Ports themselves are faced with many uncertainties in this volatile world. To meet stakeholder objectives and deal with uncertainties, adaptive port planning is increasingly being acknowledged. This method offers robust planning, and thereby, a sustainable and flexible port may be developed. The planning process starts with defining success in terms of the specific objectives of stakeholders during the projected lifetime of the port. In the present work, an integrated framework to reach a consensus on the definition of success, involving stakeholders with different influences, stakes and objectives, is presented. The framework synthesises the problem structuring method with stakeholder analysis and combines these with fuzzy logic to support decision-makers in formulating a definition of success in the planning process. Our framework is applied to the Port of Isafjordur, the third busiest port of call for cruise ships in Iceland. Values of stakeholders about port planning were structured around the value-focussed thinking method to identify stakeholder objectives. The highest level of agreement on the objectives, which is viewed here as success in port planning, was revealed by the fuzzy multi-attribute group decision-making method. 
Success was defined, prioritising an increase in competitiveness among other planning objectives, such as effective and efficient use of land, increasing safety and security, increasing hinterland connectivity, increasing financial performance, better environmental implications, flexibility creation and increasing positive economic and social impacts.","Adaptive port planning; Decision-making process; Definition of success; Iceland; Value-focussed thinking","en","journal article","","","","","","Accepted author manuscript","","2020-11-13","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:b22f1754-d42a-4016-b631-c51c803f486e","http://resolver.tudelft.nl/uuid:b22f1754-d42a-4016-b631-c51c803f486e","An Empirical Study into the Success of Listed Smart Contracts in Ethereum","Hartel, P.H. (TU Delft Cyber Security; Singapore University of Technology and Design); Homoliak, I. (Singapore University of Technology and Design; Brno University of Technology); Reijsbergen, Daniël (Singapore University of Technology and Design)","","2019","Since it takes time and effort to put a new product or service on the market, one would like to predict whether it will be a success. In general this is not possible, but it is possible to follow best practices in order to maximize the chance of success. A smart contract is intended to encode business logic and is therefore at the heart of every new business on the Ethereum blockchain. We have investigated how to measure the success of smart contracts, and whether successful smart contracts have characteristics that less successful smart contracts lack. The appearance of a smart contract on a listing website such as Etherscan or StateoftheDapps is such a characteristic. In this paper, we present a three-pronged analysis of the relative success of listed smart contracts. First, we have used statistical analysis on the publicly visible transaction history of the Ethereum blockchain to determine that listed contracts are significantly more successful than their unlisted counterparts. Next, we have conducted a survey among more than 200 developers via an anonymous online survey about their experience with the listing process. A significant majority of respondents do not believe that listing a contract itself contributes to its success, but they believe that the extra attention that is typically paid in tandem with the listing process does contribute. 
Finally, based on the respondents' answers, we have drafted 10 recommendations for developers and validated them by submitting them to an international panel of experts.","blockchain; business success; computers and information processing; ethereum; new product development; product development; recommendations for developers; smart contracts; social implications of technology; Software engineering; technology social factors","en","journal article","","","","","","","","","","","Cyber Security","","",""
"uuid:72757c08-a7e9-4fcc-9795-18de82c2631f","http://resolver.tudelft.nl/uuid:72757c08-a7e9-4fcc-9795-18de82c2631f","Space-time topology optimization for additive manufacturing: Concurrent optimization of structural layout and fabrication sequence","Wang, W. (TU Delft Materials and Manufacturing; Dalian University of Technology); Munro, D.P. (TU Delft Computational Design and Mechanics); Wang, C.C. (TU Delft Materials and Manufacturing); van Keulen, A. (TU Delft Computational Design and Mechanics); Wu, J. (TU Delft Materials and Manufacturing)","","2019","The design of optimal structures and the planning of (additive manufacturing) fabrication sequences have been considered typically as two separate tasks that are performed consecutively. In the light of recent advances in robot-assisted (wire-arc) additive manufacturing which enable addition of material along curved surfaces, we present a novel topology optimization formulation which concurrently optimizes the structure and the fabrication sequence. For this, two sets of design variables, i.e., a density field for defining the structural layout, and a time field which determines the fabrication process order, are simultaneously optimized. These two fields allow to generate a sequence of intermediate structures, upon which manufacturing constraints (e.g., fabrication continuity and speed) are imposed. The proposed space-time formulation is general, and is demonstrated on three fabrication settings, considering self-weight of the intermediate structures, process-dependent critical loads, and time-dependent material properties.","Additive manufacturing; Manufacturing process planning; Space-time optimization; Topology optimization","en","journal article","","","","","","","","","","","Materials and Manufacturing","","",""
"uuid:433f306e-ed7e-43b9-b58e-b5692d7bb6d7","http://resolver.tudelft.nl/uuid:433f306e-ed7e-43b9-b58e-b5692d7bb6d7","Decompounding discrete distributions: A nonparametric Bayesian approach","Gugushvili, Shota (Wageningen University & Research); Mariucci, Ester (University of Potsdam); van der Meulen, F.H. (TU Delft Statistics)","","2019","Suppose that a compound Poisson process is observed discretely in time and assume that its jump distribution is supported on the set of natural numbers. In this paper we propose a nonparametric Bayesian approach to estimate the intensity of the underlying Poisson process and the distribution of the jumps. We provide a Markov chain Monte Carlo scheme for obtaining samples from the posterior. We apply our method on both simulated and real data examples, and compare its performance with the frequentist plug-in estimator proposed by Buchmann and Grübel. On a theoretical side, we study the posterior from the frequentist point of view and prove that as the sample size n→∞, it contracts around the “true,” data-generating parameters at rate 1/√n, up to a n factor.","compound Poisson process; data augmentation; diophantine equation; Gibbs sampler; Metropolis-Hastings algorithm; Nonparametric Bayesian estimation","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:dbec3cff-fc13-4110-9854-81811a68b5d7","http://resolver.tudelft.nl/uuid:dbec3cff-fc13-4110-9854-81811a68b5d7","Characterization of depolarizing channels using two-photon interference","Castro do Amaral, G. (TU Delft QID/Tittel Lab; TU Delft QuTech Advanced Research Centre; Kavli institute of nanoscience Delft); Temporão, G. P. (PUC-Rio)","","2019","Depolarization is one of the most important sources of error in a quantum communication link that can be introduced by the quantum channel. Even though standard quantum process tomography can, in theory, be applied to characterize this effect, in most real-world implementations depolarization cannot be distinguished from time-varying unitary transformations, especially when the timescales are much shorter than the detectors response time. In this paper, we introduce a method for distinguishing true depolarization from fast polarization rotations by employing Hong–Ou–Mandel interference. It is shown that the results are independent of the timing resolutions of the photodetectors.","Quantum communication; Quantum process tomography; Two-photon interference","en","journal article","","","","","","","","","","","QID/Tittel Lab","","",""
"uuid:0f233b9e-6322-4b2a-8711-433a2966d02c","http://resolver.tudelft.nl/uuid:0f233b9e-6322-4b2a-8711-433a2966d02c","Convolutional Graph Neural Networks","Gama, Fernando (University of Pennsylvania); Marques, Antonio G. (Universidad Rey Juan Carlos); Leus, G.J.T. (TU Delft Signal Processing Systems); Ribeiro, Alejandro (University of Pennsylvania)","Matthews, Michael B. (editor)","2019","Convolutional neural networks (CNNs) restrict the, otherwise arbitrary, linear operation of neural networks to be a convolution with a bank of learned filters. This makes them suitable for learning tasks based on data that exhibit the regular structure of time signals and images. The use of convolutions, however, makes them unsuitable for processing data that do not exhibit such a regular structure. Graph signal processing (GSP) has emerged as a powerful alternative to process signals whose irregular structure can be described by a graph. Central to GSP is the notion of graph convolutional filters which can be used to define convolutional graph neural networks (GNNs). In this paper, we show that the graph convolution can be interpreted as either a diffusion or aggregation operation. When combined with nonlinear processing, these different interpretations lead to different generalizations which we term selection and aggregation GNNs. The selection GNN relies on linear combinations of signal diffusions at different resolutions combined with node-wise non-linearities. The aggregation GNN relies on linear combinations of neighborhood averages of different depth. Instead of node-wise nonlinearities, the nonlinearity in aggregation GNNs is pointwise on the different aggregation levels. Both of these models particularize to regular CNNs when applied to time signals but are different when applied to arbitrary graphs. 
Numerical evaluations show different levels of performance for selection and aggregation GNNs.","graph convolutions; graph neural networks; graph signal processing; network data","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-02-01","","","Signal Processing Systems","","",""
"uuid:9504cc3d-f462-43ae-bf3b-2d536be8b20c","http://resolver.tudelft.nl/uuid:9504cc3d-f462-43ae-bf3b-2d536be8b20c","Método alternativo para mejorar los modelos de campo gravitacional al incorporar información del satélite explorador de la circulación oceánica y de gravedad","Wan, Xiaoyun (China University of Geosciences); Ran, J. (TU Delft Physical and Space Geodesy)","","2019","The aim of this paper is to present an alternative method that can be used to improve existing gravity field models via the application of gradient data from Gravity field and Ocean Circulation Explorer (GOCE). First, the proposed algorithm used to construct the observation equation is presented. Then methods for noise processing in both time and space domains aimed at reducing noises are introduced. As an example, the European Improved Gravity model of the Earth by New techniques (EIGEN5C) is modified with gradient observations over the whole lifetime of the GOCE, leading to a new gravity field model named as EGMGOCE (Earth Gravitational Model of GOCE). The results show that the cumulative geoid difference between EGMGOCE and EGM08 is reduced by 4 centimeters compared with that between EIGEN5C and Earth Gravitational Model 2008 (EGM08) up to 200 degrees. The large geoid differences between EGMGOCE and EIGEN5C mainly exist in Africa, South America, Antarctica and Himalaya, which indicates the contribution from GOCE. Compared to the newest GOCE gravity field model resolved by direct method from European Space Agency (ESA), the cumulative geoid difference is reduced by 7 centimeters up to 200 degrees.","Gravity gradients; Model modification; Noise processing; Radial gravity gradient","es","journal article","","","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:a3b89c57-6b77-4d61-aa3c-91027510f202","http://resolver.tudelft.nl/uuid:a3b89c57-6b77-4d61-aa3c-91027510f202","Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering for Discontinuous Galerkin Solutions over Nonuniform Meshes: Superconvergence and Optimal Accuracy","Li, Xiaozhou (University of Electronic Science and Technology of China); Ryan, J.K. (University of East Anglia; Heinrich-Heine-Universität); Kirby, Robert M. (University of Utah); Vuik, Cornelis (TU Delft Numerical Analysis)","","2019","Smoothness-increasing accuracy-conserving (SIAC) filtering is an area of increasing interest because it can extract the “hidden accuracy” in discontinuous Galerkin (DG) solutions. It has been shown that by applying a SIAC filter to a DG solution, the accuracy order of the DG solution improves from order k+ 1 to order 2 k+ 1 for linear hyperbolic equations over uniform meshes. However, applying a SIAC filter over nonuniform meshes is difficult, and the quality of filtered solutions is usually unsatisfactory applied to approximations defined on nonuniform meshes. The applicability to such approximations over nonuniform meshes is the biggest obstacle to the development of a SIAC filter. The purpose of this paper is twofold: to study the connection between the error of the filtered solution and the nonuniform mesh and to develop a filter scaling that approximates the optimal error reduction. First, through analyzing the error estimates for SIAC filtering, we computationally establish for the first time a relation between the filtered solutions and the unstructuredness of nonuniform meshes. Further, we demonstrate that there exists an optimal accuracy of the filtered solution for a given nonuniform mesh and that it is possible to obtain this optimal accuracy by the method we propose, an optimal filter scaling. 
By applying the newly designed filter scaling over nonuniform meshes, the filtered solution demonstrates an improved accuracy order as well as a reduced error compared to the original DG solution. Finally, we apply the proposed methods over a large number of nonuniform meshes and compare the performance with existing methods to demonstrate the superiority of our method.","Discontinuous Galerkin method; Nonuniform meshes; Post-processing; SIAC filtering; Superconvergence; Unstructuredness","en","journal article","","","","","","green","","","","","Numerical Analysis","","",""
"uuid:322ea5ca-5c4f-4c74-8dc2-1ec9f222a46c","http://resolver.tudelft.nl/uuid:322ea5ca-5c4f-4c74-8dc2-1ec9f222a46c","Energy Optimization for Large-Scale 3D Manycores in the Dark-Silicon Era","Majzoub, Sohaib (University of Sharjah, Sharjah); Saleh, Resve A. (University of British Columbia); Ashraf, I. (TU Delft Computer Engineering); Taouil, M. (TU Delft Computer Engineering); Hamdioui, S. (TU Delft Computer Engineering)","","2019","In this paper, we study the impact of the idle/dynamic power consumption ratio on the effectiveness of a multi-V dd /frequency manycore design. We propose a new tool called LVSiM (a Low-Power and Variation-Aware Manycore Simulator) to carry out the experiments. It is a novel manycore simulator targeted towards low-power optimization methods including within-die process and workload variations. LVSiM provides a holistic platform for multi-V dd /frequency voltage island analysis, optimization, and design. It provides a tool for the early design exploration stage to analyze large-scale manycores with a given number of cores on 3D-stacked layers, network-on-chip communication busses, technology parameters, voltage and frequency values, and power grid parameters, using a variety of different optimization methods. LVSiM has been calibrated with Sniper/McPAT at a nominal frequency, and then the energy-delay-product (EDP) numbers were compared after frequency scaling. The average error is shown to be 10% after frequency scaling, which is sufficient for our purposes. The experiments in this work are carried out for different Idle/Dynamic ratios considering 1260 benchmarks with task sizes ranging from 4000 to 16 000 executing on 3200 cores. The best configurations are shown to produce on average 20.7% to 24.6% EDP savings compared to the nominal configuration. Traditional scheduling methods are used in the nominal configuration with the unused cores switched off. 
In addition, we show that, as the Idle/Dynamic ratio increases, the multi-Vdd/frequency approach becomes less effective. In the case of a high Idle/Dynamic ratio, the minimum EDP can be achieved through switching off unused cores as opposed to using a multi-Vdd/frequency approach. This conclusion is important, especially in the dark-silicon era, where switching cores on and/or off as needed is a common practice.","3D-stacked chip; dark-silicon; dynamic power; energy-delay-product; frequency scaling; idle power; low-power design; manycore; multicore; process variation; simulator; voltage scaling; voltage selection; within-die variation","en","journal article","","","","","","","","","","","Computer Engineering","","",""
"uuid:b6e0b30d-bd26-4db3-bbb4-72be31e5f72b","http://resolver.tudelft.nl/uuid:b6e0b30d-bd26-4db3-bbb4-72be31e5f72b","A contactless measuring speed system of belt conveyor based on machine vision and machine learning","Gao, Yuan (Taiyuan University of Technology); Qiao, Tiezhu (Taiyuan University of Technology); Zhang, Haitao (Taiyuan University of Technology); Yang, Yi (Taiyuan University of Technology); Pang, Y. (TU Delft Transport Engineering and Logistics); Wei, Hongyan (Taiyuan University of Technology)","","2019","During the operation of the belt conveyor, measuring speed of the belt conveyor is vital to the safe and efficient operation. In the existing measuring speed system, the measurement instrument is required contacting with the surface of the belt. The contact measurement method cannot avoid the occurrence of measuring error caused by slipping on the contact surface and wear of the measurement instrument. In order to solve the problems mentioned above, a new contactless measuring speed system is proposed in this paper. The system uses the CCD camera to capture the side image of belt. The speed of belt conveyor can be obtained by measuring the regularity of image texture. The proposed measuring system can meet the requirement of measuring speed in long running process of belt conveyor. Experimental results show that the measuring accuracy indicators can reach RMSE of 0.018 m/s and MAE of 0.010 m/s.","Belt conveyor; Contactless measuring speed; Image processing; Polynomial linear regression","en","journal article","","","","","","Accepted Author Manuscript","","2021-03-15","","","Transport Engineering and Logistics","","",""
"uuid:a7c0d871-4c79-4147-8335-885797b98fbf","http://resolver.tudelft.nl/uuid:a7c0d871-4c79-4147-8335-885797b98fbf","Convolutional Neural Network Architectures for Signals Supported on Graphs","Gama, F. (University of Pennsylvania); Marques, Antonio G. (Universidad Rey Juan Carlos); Leus, G.J.T. (TU Delft Signal Processing Systems); Ribeiro, Alejandro (University of Pennsylvania)","","2019","Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. 
Multinode aggregation GNNs are consistently the best-performing GNN architecture.","convolutional neural networks; Deep learning; graph filters; graph signal processing; pooling","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-08-15","","","Signal Processing Systems","","",""
"uuid:65a5f5d2-a864-49b0-b9b2-190f8f7b545e","http://resolver.tudelft.nl/uuid:65a5f5d2-a864-49b0-b9b2-190f8f7b545e","A numerical Bayesian-calibrated characterization method for multiscale prepreg preforming simulations with tension-shear coupling","Zhang, Weizhao (Northwestern University); Bostanabad, Ramin (Northwestern University); Liang, Biao (Northwestern University); Su, Xuming (Ford Motor Company); Zeng, Danielle (Ford Motor Company); Bessa, M.A. (TU Delft (OLD) MSE-5); Wang, Yanchao (Tongji University); Chen, Wei (Northwestern University); Cao, Jian (Northwestern University)","","2019","Carbon fiber reinforced plastics (CFRPs) are attracting growing attention in industry because of their enhanced properties. Preforming of thermoset carbon fiber prepregs is one of the most common production techniques of CFRPs. To simulate preforming, several computational methods have been developed. Most of these methods, however, obtain the material properties directly from experiments such as uniaxial tension and bias-extension where the coupling effect between tension and shear is not considered. Neglecting this coupling effect deteriorates the prediction accuracy of simulations. To address this issue, we develop a Bayesian model calibration and material characterization approach in a multiscale finite element preforming simulation framework that utilizes mesoscopic representative volume element (RVE) to account for the tension-shear coupling. A new geometric modeling technique is first proposed to generate the RVE corresponding to the close-packed uncured prepreg. This RVE model is then calibrated with a modular Bayesian approach to estimate the yarn properties, test its potential biases against the experiments, and fit a stress emulator. 
The predictive capability of this multiscale approach is further demonstrated by employing the stress emulator in the macroscale preforming simulation which shows that this approach can provide accurate predictions.","Bayesian calibration; Gaussian processes; Multiscale simulations; Preforming; Prepreg","en","journal article","","","","","","Accepted Author Manuscript","","2019-11-22","","","(OLD) MSE-5","","",""
"uuid:d3a82723-999a-41d4-8a21-79f8fb6ab1b7","http://resolver.tudelft.nl/uuid:d3a82723-999a-41d4-8a21-79f8fb6ab1b7","Attributing the hydrological impact of different land use types and their long-term dynamics through combining parsimonious hydrological modelling, alteration analysis and PLSR analysis","Gebremicael, T.G. (TU Delft Water Resources; Tigray Agricultural Research Institute; IHE Delft Institute for Water Education); Abbas Mohamedali, Y. (TU Delft Water Resources; IHE Delft Institute for Water Education; Tigray Agricultural Research Institute); van der Zaag, P. (TU Delft Water Resources; IHE Delft Institute for Water Education)","","2019","Understanding the relationship between hydrological processes and environmental changes is important for improved water management. The Geba catchment in Ethiopia, forming the headwaters of Tekeze-Atbara basin, was known for its severe land degradation before the recent success in integrated watershed management. This study analyses the hydrological response attributed to land management change using an integrated approach composed of (i) simulating the hydrological response of Land Use/Cover (LULC) changes; (ii) assessing the alteration of streamflow using Alteration of Hydrological Indicators (IHA); and (iii) quantifying the contribution of individual LULC types to the hydrology using Partial Least Square Regression model (PLSR). The results show that the expansion of agricultural and grazing land at the expense of natural vegetation has increased the surface runoff 77% and decreased dry season flow by 30% in the 1990s compared to 1970s. However, natural vegetation started to recover from the late 1990s and dry season flows increased by 16%, while surface runoff declined by 19%. More pronounced changes of the streamflow were noticed at sub-catchment level, mainly associated with the uneven spatial distribution of land degradation and rehabilitation. 
However, the rate of increase of low-flow halted in the 2010s, most probably due to an increase of water withdrawals for irrigation. Fluctuations in hydrological alteration parameters are in agreement with the observed LULC change. The PLSR analysis demonstrates that most LULC types showed a strong association with all hydrological components. These findings demonstrate that changing water conditions are attributed to the observed LULC change dynamics. The combined analysis of rainfall-runoff modelling, alteration indicators and PLSR is able to assess the impact of environmental change on the hydrology of complex catchments. The IHA tool is robust in assessing the magnitude of streamflow alterations obtained from the hydrological model, while the PLSR method is useful to zoom into which LULC type is responsible for this alteration.","Geba catchment; Hydrological processes; IHA analysis; Land use/cover; PLSR analysis; Wflow model","en","journal article","","","","","","","","","","","Water Resources","","",""
"uuid:1805e82e-4b17-42c7-9c1a-9df35be6d38d","http://resolver.tudelft.nl/uuid:1805e82e-4b17-42c7-9c1a-9df35be6d38d","Why listening in background noise is harder in a non-native language than in a native language: A review","Scharenborg, O.E. (TU Delft Multimedia Computing; Radboud Universiteit Nijmegen); van Os, Marjolein (Saarland University)","","2019","There is ample evidence that recognising words in a non-native language is more difficult than in a native language, even for those with a high proficiency in the non-native language involved, and particularly in the presence of background noise. Why is this the case? To answer this question, this paper provides a systematic review of the literature on non-native spoken-word recognition in the presence of background noise, and posits an updated theory on the effect of background noise on native and non-native spoken-word recognition. The picture that arises is that although spoken-word recognition in the presence of background noise is harder in a non-native language than in one's native language, this difference is not due to a differential effect of background noise on native and non-native listening. Rather, it can be explained by differences in language exposure, which influences the uptake and use of phonetic and contextual information in the speech signal for spoken-word recognition.","Background noise; Cognitive processes; Non-native; Spoken-word recognition","en","review","","","","","","Accepted author manuscript","","2021-03-16","","","Multimedia Computing","","",""
"uuid:87d36c7d-1a94-44d7-8703-bb1868176df6","http://resolver.tudelft.nl/uuid:87d36c7d-1a94-44d7-8703-bb1868176df6","Compensation for Process and Temperature Dependency in a CMOS Image Sensor","Xie, S. (TU Delft Electronic Instrumentation); Theuwissen, A.J.P.A.M. (TU Delft Electronic Instrumentation; Harvest Imaging)","","2019","This paper analyzes and compensates for process and temperature dependency among a (Complementary Metal Oxide Semiconductor) CMOS image sensor (CIS) array. Both the analysis and compensation are supported with experimental results on the CIS’s dark current, dark signal non-uniformity (DSNU), and conversion gain (CG). To model and to compensate for process variations, process sensors based on pixel source follower (SF)’s transconductance g m,SF have been proposed to model and to be compared against the measurement results of SF gain A SF . In addition, A SF ’s thermal dependency has been analyzed in detail. To provide thermal information required for temperature compensation, six scattered bipolar junction transistor (BJT)-based temperature sensors replace six image pixels inside the array. They are measured to have an untrimmed inaccuracy within ±0.5 ⁰C. Dark signal and CG’s thermal dependencies are compensated using the on-chip temperature sensors by at least 79% and 87%, respectively.","CMOS image sensor (CIS); Conversion gain (CG); Dark current; Dark signal non-uniformity (DSNU); Delta-sigma (Δ-σ) modulator; Process variability; Process variations; Temperature sensors; Thermal compensation","en","journal article","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:add3cb34-1a6d-4271-8502-e879608878ad","http://resolver.tudelft.nl/uuid:add3cb34-1a6d-4271-8502-e879608878ad","CNN architectures for GRAPH data","Gama, F. (University of Pennsylvania); Marques, Antonio G. (King Juan Carlos University); Leus, G.J.T. (TU Delft Signal Processing Systems); Ribeiro, Alejandro (University of Pennsylvania)","","2019","In this ongoing work, we describe several architectures that generalize convolutional neural networks (CNNs) to process signals supported on graphs. The general idea of the replace time invariant filters with graph filters to generate convolutional features and to replace pooling with sampling schemes for graph signals. The different architectures are compared and the key trade offs are identified. Numerical simulations with both synthetic and real-world data are used to illustrate the advantages of the proposed approaches.","Convolutional neural networks; Deep learning; Geometric learning; Graph signal processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-08-21","","","Signal Processing Systems","","",""
"uuid:52f69988-4064-4564-ada6-9e2c95e5c50d","http://resolver.tudelft.nl/uuid:52f69988-4064-4564-ada6-9e2c95e5c50d","Advances in Distributed Graph Filtering","Coutino, Mario (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2019","Graph filters are one of the core tools in graph signal processing. A central aspect of them is their direct distributed implementation. However, the filtering performance is often traded with distributed communication and computational savings. To improve this tradeoff, this paper generalizes state-of-the-art distributed graph filters to filters where every node weights the signal of its neighbors with different values while keeping the aggregation operation linear. This new implementation, labeled as edge-variant graph filter, yields a significant reduction in terms of communication rounds while preserving the approximation accuracy. In addition, we characterize a subset of shift-invariant graph filters that can be described with edge-variant recursions. By using a low-dimensional parameterization, these shift-invariant filters provide new insights in approximating linear graph spectral operators through the succession and composition of local operators, i.e., fixed support matrices. 
A set of numerical results shows the benefits of the edge-variant graph filters over current methods and illustrates their potential for a wider range of applications than graph filtering.","ARMA; Consensus; distributed beamforming; distributed signal processing; edge-variant graph filters; FIR; graph filters; graph signal processing; IIR","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-11-01","","","Signal Processing Systems","","",""
"uuid:084b065d-495b-441a-a806-abfc2ea8f27c","http://resolver.tudelft.nl/uuid:084b065d-495b-441a-a806-abfc2ea8f27c","A Defect Classication Methodology for Sewer Image Sets with Convolutional Neural Networks","Meijer, D.W.J. (TU Delft Sanitary Engineering; Universiteit Leiden); Scholten, L. (TU Delft Sanitary Engineering); Clemens, F.H.L.R. (TU Delft Sanitary Engineering; Deltares); Knobbe, Arno (Universiteit Leiden)","","2019","Sewer pipes are commonly inspected in situ with CCTV equipment. The CCTV footage is then reviewed by human operators in order to classify defects in the pipes and make a recommendation on possible interventions. This process is both labor-intensive and error-prone. Other researchers have suggested machine learning techniques to (partially) automate the human review of this footage, but the automated classifiers are often validated in artificial testing setups, leading to biased results that do not translate directly to operational impact. In this work, we discuss suitable evaluation metrics for this specific classification task — most notably ‘specificity at sensitivity’ and ‘precision at recall’ — and the importance of using a validation setup that includes a realistic ratio of images with defects to images without defects, and a sufficiently large dataset. We also introduce ‘leave-two-inspections-out’ cross validation, designed to eliminate a data leakage bias that would otherwise cause an overestimation of classifier performance. We designed a convolutional neural network (CNN) and applied this validation methodology to automatically detect the twelve most common defect types in a dataset of over 2 million CCTV images. With this dataset and our validation methodology, our CNN outperforms the state-of-the-art. Classification performance was highest for intruding and defective connections and lowest for porous pipes. 
While the CNN is not capable of fully automated classification at sufficient performance levels, we determined that if we augment the human operator with the CNN, this may reduce the required human labor by up to 60.5%.","Automated classification; CCTV inspection; Classifier validation; Convolutional neural networks; Image processing; Sewer asset management","en","journal article","","","","","","Accepted Author Manuscript","","2021-05-03","","","Sanitary Engineering","","",""
"uuid:19353232-0bdb-4cb0-b633-3c6e23edcfde","http://resolver.tudelft.nl/uuid:19353232-0bdb-4cb0-b633-3c6e23edcfde","Evaluation of innovative ideas for Public Transport proposed by citizens using Multi-Criteria Decision Analysis (MCDA)","Nalmpantis, Dimitrios (Aristotle University of Thessaloniki); Roukouni, A. (TU Delft Policy Analysis; Aristotle University of Thessaloniki); Genitsaris, Evangelos (Aristotle University of Thessaloniki); Stamelou, Afroditi (Aristotle University of Thessaloniki; Hellenic Institute of Transport); Naniopoulos, Aristotelis (Aristotle University of Thessaloniki)","","2019","Introduction: The use of participatory techniques in the field of transport is coming to the forefront recently. In this frame, eight co-creation workshops and five online crowdsourcing campaigns took place in Thessaloniki, Southern Tuscany, Rotterdam/The Hague, and Frankfurt, from which many innovative ideas to enhance Public Transport were generated by citizens. Purpose: A simple list of innovations would not be very useful for Public Transport Operators, as they cannot implement all of them at once. There was an obvious need for their ranking and this is the purpose of this paper. Methods: The ranking was realized with the most used Multi-Criteria Decision Analysis method in transportation research, i.e. the Analytic Hierarchy Process, using three criteria: Feasibility, Utility, and Innovativeness. An online questionnaire was distributed to experts, using a modified snowball sampling technique, which yielded 97 completed questionnaires. Results: Utility (42.90%) was found to be the most important criterion, followed by Feasibility (40.10%), and Innovativeness (17.00%). Four lists of innovations were derived, ranked with respect to a) all three examined criteria, b) Feasibility, c) Utility, and d) Innovativeness. 
The highest ranked innovation for a) and c) was found to be Mobility as a Service and platform with real-time travel, comfort, and multi-modal information; for b) City marketing from a Public Transport perspective; and for d) Advanced e-ticketing system. Conclusion: The results revealed which of the innovations are the most promising and provide valuable insight into how to integrate innovation with Public Transport to make it more attractive. Public Transport Operators may use the results according to the peculiarities of their city and the importance they give to Feasibility, Utility, and Innovativeness.","Analytic Hierarchy Process (AHP); Co-creation; Collective intelligence; Evaluation; Innovation; Multi-Criteria Decision Analysis (MCDA); Participatory techniques; Public transport","en","journal article","","","","","","","","","","","Policy Analysis","","",""
"uuid:d9e15987-8b79-44bc-a47a-223258920496","http://resolver.tudelft.nl/uuid:d9e15987-8b79-44bc-a47a-223258920496","Ultrasonic synthetic-aperture interface imaging","van der Neut, J.R. (TU Delft ImPhys/Acoustical Wavefield Imaging); Fokkema, J.T. (TU Delft ImPhys/Acoustical Wavefield Imaging); van den Berg, P.M. (TU Delft ImPhys/Acoustical Wavefield Imaging); Zapf, Michael (Karlsruhe Institut für Technologie); Ruiter, Nicole V. (Karlsruhe Institut für Technologie); Taskin, U. (TU Delft ImPhys/Acoustical Wavefield Imaging); van Dongen, K.W.A. (TU Delft ImPhys/Acoustical Wavefield Imaging)","","2019","Synthetic-aperture (SA) imaging is a popular method to visualize the reflectivity of an object from ultrasonic reflections. The method yields an image of the (volume) contrast in acoustic impedance with respect to the embedding. Typically, constant mass density is assumed in the underlying derivation. Due to the band-limited nature of the recorded data, the image is blurred in space, which is quantified by the associated point spread function. SA volume imaging is valid under the Born approximation, where it is assumed that the contrast is weak. When objects are large with respect to the wavelength, it is questionable whether SA volume imaging should be the method-of-choice. Herein, we propose an alternative solution that we refer to as SA interface imaging. This approach yields a vector image of the discontinuities of acoustic impedance at the tissue interfaces. Constant wave speed is assumed in the underlying derivation. The image is blurred in space by a tensor, which we refer to as the interface spread function. SA interface imaging is valid under the Kirchhoff approximation, where it is assumed that the wavelength is small compared to the spatial dimensions of the interfaces. We compare the performance of volume and interface imaging on synthetic data and on experimental data of a gelatin cylinder with a radius of 75 wavelengths, submerged in water. 
As expected, the interface image peaks at the gelatin-water interface, while the volume image exposes a peak and trough on opposing sides of the interface.","Acoustic signal processing; image representation; ultrasonic imaging","en","journal article","","","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:c98e848f-f865-4b52-89cb-3a9422ca9239","http://resolver.tudelft.nl/uuid:c98e848f-f865-4b52-89cb-3a9422ca9239","Co-designing with people with dementia: A scoping review of involving people with dementia in design research","Wang, G. (TU Delft Applied Ergonomics and Design); Marradia, Chiara (Student TU Delft); Albayrak, A. (TU Delft Applied Ergonomics and Design); van der Cammen, T.J.M. (TU Delft Applied Ergonomics and Design; Erasmus MC)","","2019","Co-designing with people with dementia (PwD) can uncover their needs and preferences, which have been often overlooked. It is difficult for PwD to understand designers and express themselves in a conventional co-design session. This study aims to evaluate the effects of involving PwD in design research on both PwD and the design process; to identify the trends of involving PwD in design research; to extract tools, recommendations, and limitations of involving PwD from reviewed studies to update the recommendations on how to co-design with PwD. A scoping review was carried out within the electronic databases PubMed and Scopus, and eight research questions were proposed, in order to gain specific knowledge on the involvement of PwD in design research. Twenty-six studies met the inclusion criteria, and 32 sessions were evaluated. Beneficial effects on both PwD and the design process were reported. The number of studies involving PwD in the moderate and severe stages of dementia has increased. Based on the review, an update of the existing tools and recommendations for co-designing with PwD is provided and a list of limitations of involving PwD is presented. The review shows that involving PwD in design research is beneficial for both the PwD and the design process, and there is a shift towards involving people who are in the moderate and severe stages of dementia. 
The authors propose that multidisciplinary meetings and case studies should be carried out to evaluate and refine the list of tools and recommendations as well as the list of limitations generated in this review.","Co-design; Dementia stage; Design process; Design research; Recommendations; Scoping review","en","review","","","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:f2a1a715-ede8-4f9c-9a57-568d16b3942d","http://resolver.tudelft.nl/uuid:f2a1a715-ede8-4f9c-9a57-568d16b3942d","Sparse Sampling for Inverse Problems with Tensors","Ortiz-Jimenez, Guillermo (Swiss Federal Institute of Technology; Student TU Delft); Coutino, Mario (TU Delft Signal Processing Systems); Chepuri, S.P. (TU Delft Signal Processing Systems; Indian Institute of Science); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2019","We consider the problem of designing sparse sampling strategies for multidomain signals, which can be represented using tensors that admit a known multilinear decomposition. We leverage the multidomain structure of tensor signals and propose to acquire samples using a Kronecker-structured sensing function, thereby circumventing the curse of dimensionality. For designing such sensing functions, we develop low-complexity greedy algorithms based on submodular optimization methods to compute near-optimal sampling sets. We present several numerical examples, ranging from multiantenna communications to graph signal processing, to validate the developed theory.","Graph signal processing; multidimensional sampling; sparse sampling; submodular optimization; tensors","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-11-15","","","Signal Processing Systems","","",""
"uuid:ea33e30d-058e-4690-836e-630ab22e5376","http://resolver.tudelft.nl/uuid:ea33e30d-058e-4690-836e-630ab22e5376","An Energy-Efficient Multi-Sensor Compressed Sensing System Employing Time-Mode Signal Processing Techniques","Akgün, O.C. (TU Delft Bio-Electronics); Mangia, Mauro (University of Bologna); Pareschi, Fabio (Politecnico di Torino); Rovatti, Riccardo (University of Bologna); Setti, Gianluca (Politecnico di Torino); Serdijn, W.A. (TU Delft Bio-Electronics)","","2019","This paper presents the design of an ultra-low energy, rakeness-based compressed sensing (CS) system that utilizes time-mode (TM) signal processing (TMSP). To realize TM CS operation, the presented implementation makes use of monostable multivibrator based analog-to-time converters, fixed-width pulse generators, basic digital gates and an asynchronous time-to-digital converter. The TM CS system was designed in a standard 0.18 µm IC process and operates from a supply voltage of 0.6V. The system is designed to accommodate data from 128 individual sensors and outputs 9-bit digital words with an average reconstruction SNR of 35.31 dB, a compression ratio of 3.2, with an energy dissipation per channel per measurement vector of 0.621 pJ at a rate of 2.23 k measurement vectors per second.","Compressed sensing; Energy efficiency; Rakeness; Time-mode; Time-mode signal processing; Ultra-low energy","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Bio-Electronics","","",""
"uuid:00741788-7a56-4de7-aa87-633e4f70c59a","http://resolver.tudelft.nl/uuid:00741788-7a56-4de7-aa87-633e4f70c59a","Interaction effect of background sound type and sound pressure level on children of primary schools in the Netherlands","Zhang, D. (TU Delft Indoor Environment); Tenpierik, M.J. (TU Delft Building Physics); Bluyssen, P.M. (TU Delft Indoor Environment)","","2019","The acoustic conditions of classrooms received a lot of attention in the last decades because of its important role in school children's comfort and performance. In a previous field study of 54 classrooms from 21 schools in the Netherlands, more than 85% of the 1145 primary school children reported that they were bothered by noise in the classroom. The objective of this study is to identify the effect of background sounds on children's performance, sound evaluation and influence assessment based on a lab study conducted in the SenseLab. 335 school children (9 to 13 years old)from the previous studied schools participated in the lab study. They were subjected to a series of listening tests and evaluations in two acoustic test chambers (acoustically treated or untreated)with one of seven randomly played background sounds: 45 dB(A)or 60 dB(A)traffic noise, 45 dB(A)or 60 dB(A)children talking, 45 dB(A)or 60 dB(A)music, or no sound (≈30 dB(A)). A two-way ANOVA was applied to analyse the interaction effect of sound type and sound pressure level (SPL)on children's performance, sound evaluation and influence assessment in each of the chambers. Statistically significant interactions between the impact of sound type and SPL on children's phonological processing performance and their influence assessments were found in the untreated chamber.","Interaction effect; Music; Noise; Phonological processing; Primary school children; Sound pressure level","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' 
- Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-11-09","","","Indoor Environment","","",""
"uuid:8d572871-4ead-4db1-a7fc-99fcca71ba46","http://resolver.tudelft.nl/uuid:8d572871-4ead-4db1-a7fc-99fcca71ba46","Model-based optimization of integrated purification sequences for biopharmaceuticals","Pirrung, S.M. (TU Delft BT/Bioprocess Engineering); Berends, Carmen (Student TU Delft); Backx, Antoon H. (Student TU Delft); van Beckhoven, Ruud F.W.C. (DSM); Eppink, Michel H.M. (Synthon Biopharmaceuticals B.V.); Ottens, M. (TU Delft BT/Bioprocess Engineering)","","2019","Finding the best purification process is a challenging task. Recently, mechanistic models that can accelerate the development of chromatographic unit operations, the most important purification units, became widely available. In previous work, several chromatographic models have been linked together to simulate and optimize integrated processes. However, considering only chromatographic steps may lead to a suboptimal process. Consequently, the aim of this study was to include models for ultra- and diafiltration units into the optimization approach to account for buffer exchange steps before or between chromatography units. This approach was applied to an industrial case, the purification of a monoclonal antibody, where cation exchange, hydrophobic interaction and mixed mode were the possible chromatographic separation modes. It turned out that only the duration of the total filtration step and the duration of the ultrafiltration step were crucial variables for the optimization of the ultra- and diafiltration steps. The ‘best’ in silico purification process was found based on the performance criteria yield and solvent usage. The purity was required to be at least 99.9%.","Buffer exchange; Diafiltration; Downstream processing (DSP); High-throughput process development (HTPD); Mechanistic modelling; Ultrafiltration","en","journal article","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:d2a3c450-52ab-43ae-bad6-ba137911a0ba","http://resolver.tudelft.nl/uuid:d2a3c450-52ab-43ae-bad6-ba137911a0ba","Determinants of presence and removal of antibiotic resistance genes during WWTP treatment: A cross-sectional study","Pallares Vega, R. (TU Delft BT/Environmental Biotechnology; Wetsus, Centre for Sustainable Water Technology); Blaak, Hetty (Universiteit Utrecht); van der Plaats, Rozemarijn (Universiteit Utrecht); de Roda Husman, Ana M. (Universiteit Utrecht); Hernandez Leal, Lucia (Wetsus, Centre for Sustainable Water Technology); van Loosdrecht, Mark C.M. (TU Delft BT/Environmental Biotechnology); Weissbrodt, D.G. (TU Delft BT/Environmental Biotechnology); Schmitt, Heike (Wetsus, Centre for Sustainable Water Technology; Rijksinstituut voor Volksgezondheid en Milieu (RIVM); Universiteit Utrecht)","","2019","Wastewater treatment plants (WWTPs), linking human fecal residues and the environment, are considered as hotspots for the spread of antimicrobial resistance (AMR). In order to evaluate the role of WWTPs and underlying operational parameters for the removal of AMR, the presence and removal efficiency of a selected set of 6 antimicrobial resistance genes (ARGs) and 2 mobile genetic elements (MGEs) was evaluated by means of qPCR in influent and effluent samples from 62 Dutch WWTPs. The role of possible factors impacting the concentrations of ARGs and MGEs in the influent and their removal was identified through statistical analysis. ARGs and the class I integron-integrase gene (intI1) were, on average, removed to a similar extent (1.76 log reduction) or better (+0.30–1.90 logs) than the total bacteria (measured as 16S rRNA gene). In contrast, broad-host-range plasmids (IncP-1) had a significantly increased (p < 0.001) relative abundance after treatment. The presence of healthcare institutions in the area served did only slightly increase the concentrations of ARGs or MGEs in influent. 
From the extended panel of operational parameters, rainfall, increasing the hydraulic load of the plant, most significantly (p < 0.05) affected the treatment efficiency by decreasing it on average −0.38 logs per time the flow exceeded the average daily flow. Our results suggest that overall, WWTP treatments do not favor the proliferation of the assessed resistance genes but might increase the relative abundance of broad-host-range plasmids of the IncP-1 type.","ARGs; IncP plasmids; MGE; Process design; Rainfall; Removal efficiency; WWTPs","en","journal article","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:f267b8d5-414e-4769-9e23-86bf6317529c","http://resolver.tudelft.nl/uuid:f267b8d5-414e-4769-9e23-86bf6317529c","Aggregation Graph Neural Networks","Gama, F. (University of Pennsylvania); Marques, Antonio G. (King Juan Carlos University); Ribeiro, Alejandro (University of Pennsylvania); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2019","Graph neural networks (GNNs) regularize classical neural networks by exploiting the underlying irregular structure supporting graph data, extending its application to broader data domains. The aggregation GNN presented here is a novel GNN that exploits the fact that the data collected at a single node by means of successive local exchanges with neighbors exhibits a regular structure. Thus, regular convolution and regular pooling yield an appropriately regularized GNN. To address some scalability issues that arise when collecting all the information at a single node, we propose a multi-node aggregation GNN that constructs regional features that are later aggregated into more global features and so on. We show superior performance in a source localization problem on synthetic graphs and on the authorship attribution problem.","convolutional neural networks; graph neural networks; graph signal processing; network data","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-10-17","","","Signal Processing Systems","","",""
"uuid:c5834d23-0ef0-4397-b636-9a3f5d3caca9","http://resolver.tudelft.nl/uuid:c5834d23-0ef0-4397-b636-9a3f5d3caca9","Asynchronous Distributed Edge-Variant Graph Filters","Coutino, Mario (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2019","As the size of the sensor network grows, synchronization starts to become the main bottleneck for distributed computing. As a result, efforts in several areas have been focused on the convergence analysis of asynchronous computational methods. In this work, we aim to cross-pollinate distributed graph filters with results in parallel computing to provide guarantees for asynchronous graph filtering. To alleviate the possible reduction of convergence speed due to asynchronous updates, we also show how a slight modification to the graph filter recursion, through operator splitting, can be performed to obtain faster convergence. Finally, through numerical experiments the performance of the discussed methods is illustrated.","asynchronous filtering; distributed signal processing; edge-variant graph filters; graph filters; graph signal processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-01-04","","","Signal Processing Systems","","",""
"uuid:6db183f1-2c58-49ea-9d01-4255f3f9f5e3","http://resolver.tudelft.nl/uuid:6db183f1-2c58-49ea-9d01-4255f3f9f5e3","A machine learning approach to the design of customized shoe lasts","Booth, Brian G. (Universiteit Antwerpen); Sijbers, Jan (Universiteit Antwerpen); Huysmans, T. (TU Delft Applied Ergonomics and Design)","","2019","","3-D scanning; data processing; design; neural networks; shoe last","en","journal article","","","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:c122aa5b-78f5-4c4e-88d0-c5d18bd65d44","http://resolver.tudelft.nl/uuid:c122aa5b-78f5-4c4e-88d0-c5d18bd65d44","Application of MODFLOW with boundary conditions analyses based on limited available observations: A case study of Birjand plain in East Iran","Aghlmand, Reza (Ferdowsi University of Mashhad); Abbasi, A. (TU Delft Water Resources; Ferdowsi University of Mashhad)","","2019","Increasing water demands, especially in arid and semi-arid regions, continuously exacerbate groundwater resources as the only reliable water resources in these regions. Groundwater numerical modeling can be considered as an effective tool for sustainable management of limited available groundwater. This study aims to model the Birjand aquifer using GMS: MODFLOW groundwater flow modeling software to monitor the groundwater status in the Birjand region. Due to the lack of the reliable required data to run the model, the obtained data from the Regional Water Company of South Khorasan (RWCSK) are controlled using some published reports. To get practical results, the aquifer boundary conditions are improved in the established conceptual method by applying real/field conditions. To calibrate the model parameters, including the hydraulic conductivity, a semi-transient approach is applied by using the observed data of seven years. For model performance evaluation, mean error (ME), mean absolute error (MAE), and root mean square error (RMSE) are calculated. The results of the model are in good agreement with the observed data and therefore, the model can be used for studying the water level changes in the aquifer. In addition, the results can assist water authorities for more accurate and sustainable planning and management of groundwater resources in the Birjand region.","Birjand aquifer; Calibration process; GMS: MODFLOW; Groundwater modeling","en","journal article","","","","","","","","","","","Water Resources","","",""
"uuid:7200eb5c-bff3-469b-b605-031cda6531a9","http://resolver.tudelft.nl/uuid:7200eb5c-bff3-469b-b605-031cda6531a9","A Three-Dimensional Array for the Study of Infrasound Propagation Through the Atmospheric Boundary Layer","Smink, M.M.E. (TU Delft Applied Geophysics and Petrophysics; Royal Netherlands Meteorological Institute (KNMI)); Assink, Jelle D. (Royal Netherlands Meteorological Institute (KNMI)); Bosveld, Fred C. (Royal Netherlands Meteorological Institute (KNMI)); Smets, P.S.M. (TU Delft Applied Geophysics and Petrophysics); Evers, L.G. (TU Delft Applied Geophysics and Petrophysics; Royal Netherlands Meteorological Institute (KNMI))","","2019","The Royal Netherlands Meteorological Institute (KNMI) operates a three-dimensional microbarometer array at the Cabauw Experimental Site for Atmospheric Research observatory. The array consists of five microbarometers on a meteorological tower up to an altitude of 200 m. Ten ground-based microbarometers surround the tower with an array aperture of 800 m. This unique setup allows for the study of infrasound propagation in three dimensions. The added value of the vertical dimension is the sensitivity to wind and temperature in the atmospheric boundary layer over multiple altitudes. In this study, we analyze infrasound generated by an accidental chemical explosion at the Moerdijk petrochemical plant on 3 June 2014. The recordings of the tower microbarometers show two sequential arrivals, whereas the recordings on the ground show one wavefront. This arrival structure is interpreted to be the upgoing and downgoing wavefronts. The observations are compared with propagation modeling results using global-scale and mesoscale atmospheric models. Independent temperature and wind measurements, which are available at the Cabauw Experimental Site for Atmospheric Research, are used for comparison with model output. The modeling results explain the signal arrival times; however, the tower wavefront arrivals are not explained. 
This study is important for understanding the influence of the atmospheric boundary layer on infrasound detections and propagation.","array processing; atmospheric boundary layer; atmospheric models; infrasound; propagation modeling","en","journal article","","","","","","","","2020-02-07","","","Applied Geophysics and Petrophysics","","",""
"uuid:271f203d-3de7-48c7-8e6c-a0c990b88539","http://resolver.tudelft.nl/uuid:271f203d-3de7-48c7-8e6c-a0c990b88539","An initial evaluation framework for the design and operational use of maritime STAMP-based safety management systems","Valdez Banda, Osiris A. (Aalto University); Goerlandt, Floris (Dalhousie University); Salokannel, Johanna (NOVIA University of Applied Science); van Gelder, P.H.A.J.M. (TU Delft Safety and Security Science)","","2019","A safety management system (SMS) is the common means used by organizations to assess organizational performance with respect to the safety and well-being of people, property and the natural ecosystem. A SMS provides confidence to diverse stakeholders that organizational safety is at an appropriate level and fulfils the applicable regulatory standards. As a multifaceted system for organizational safety assessment, ensurance and assurance, the evaluation of the design and operational use of SMS is a complex process. An evaluation needs to provide evidence about how well the design and operation of an SMS complies with applicable standards and how well the methods used in the SMS implementation support the organizational policies and practical work. In the maritime domain, SMS is broadly applied. However, there are few theoretically rooted SMS design approaches, and there is a lack of frameworks to evaluate how well the SMS is designed and how effectively it operates. This paper proposes an initial evaluation framework for the design and operational use of a maritime SMS design approach based on Systems-Theoretic Accident Model and Processes (STAMP), realist evaluation and Bayesian Networks. This framework is applied for a case study of vessel traffic services (VTS) Finland to test its relevance and ability to guide the SMS design. The experiences gained in the case study, and the related discussion on the framework, can guide further research in this area. 
Ultimately, the work can be used as a basis for developing maritime SMS auditing processes, based on specific theoretical and methodological approaches.","Evaluation framework; Maritime safety; Safety management system; Systems-Theoretic Accident Model and Processes (STAMP); Vessel traffic services","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:8da08da3-208c-41ea-bcab-6788d4c289f5","http://resolver.tudelft.nl/uuid:8da08da3-208c-41ea-bcab-6788d4c289f5","Review of the robustness and applicability of monocular pose estimation systems for relative navigation with an uncooperative spacecraft","Pasqualetto Cassinis, L. (TU Delft Space Systems Egineering); Fonod, R. (TU Delft Space Systems Egineering); Gill, E.K.A. (TU Delft Space Engineering)","","2019","The relative pose estimation of an inactive target by an active servicer spacecraft is a critical task in the design of current and planned space missions, due to its relevance for close-proximity operations, i.e. the rendezvous with a space debris and/or in-orbit servicing. Pose estimation systems based solely on a monocular camera are recently becoming an attractive alternative to systems based on active sensors or stereo cameras, due to their reduced mass, power consumption and system complexity. In this framework, a review of the robustness and applicability of monocular systems for the pose estimation of an uncooperative spacecraft is provided. Special focus is put on the advantages of multispectral monocular systems as well as on the improved robustness of novel image processing schemes and pose estimation solvers. The limitations and drawbacks of the validation of current pose estimation schemes with synthetic images are further discussed, together with the critical trade-offs for the selection of visual-based navigation filters. The state-of-the-art techniques are analyzed in order to provide an insight into the limitations involved under adverse illumination and orbit scenarios, high image contrast, background noise, and low signal-to-noise ratio, which characterize actual space imagery, and which could jeopardize the image processing algorithms and affect the pose estimation accuracy as well as the navigation filter's robustness. 
Specifically, a comparative assessment of current solutions is given at different levels of the pose estimation process, in order to bring a novel and broad perspective as compared to previous works.","Active debris removal; Image processing; In-orbit servicing; Monocular cameras; Relative pose estimation; Visual-based navigation filters","en","review","","","","","","","","2020-06-15","","Space Engineering","Space Systems Engineering","","",""
"uuid:efd63210-7d9c-4e18-848c-7f9382f603c0","http://resolver.tudelft.nl/uuid:efd63210-7d9c-4e18-848c-7f9382f603c0","From thin to extra-thick adhesive layer thicknesses: Fracture of bonded joints under mode I loading conditions","Lopes Fernandes, R. (TU Delft Structural Integrity & Composites); Teixeira De Freitas, S. (TU Delft Structural Integrity & Composites); Budzik, Michal K.; Poulis, J.A. (TU Delft Adhesion Institute); Benedictus, R. (TU Delft Structural Integrity & Composites)","","2019","The fracture behaviour of joints bonded with a structural epoxy adhesive and bond line thicknesses of 0.1–4.5 mm has been studied. However, limited research is found on similar joints with thicker bond lines, which are relevant for maritime applications. Therefore, the effect of the adhesive bond line thickness, varying from 0.4 to 10.1 mm, on the mode I fracture behaviour of steel to steel joints bonded with a structural epoxy adhesive was investigated in this study. An experimental test campaign of double-cantilever beam (DCB) specimens was carried out in laboratory conditions. Five bond line thicknesses were studied: 0.4, 1.1, 2.6, 4.1 and 10.1 mm. Analytical predictions of the experimental load-displacement curves were performed based on the Simple Beam Theory (SBT), the Compliance Calibration Method (CCM) and the Penado-Kanninen (P-K) model. The P-K model was used to determine the mode I strain energy release rate (SERR). The average mode I SERR, G Iav., presented similar values for the specimens with adhesive bond line thicknesses of 0.4, 1.1 and 2.6 mm (G I av.=0.71, 0.61, 0.63 N/mm, respectively). However, it increased by approximately 63% for 4.1 mm (G I av.=1.16 N/mm) and decreased by about 10% (in comparison with 4.1 mm) for the 10.1 mm (G I av.=1.04 N/mm). The trend of the G Iav. 
in relation to the bond line thickness is explained by the combination of three factors: the crack path location, the failure surfaces features and the stress field ahead of the crack tip.","Extra-thick bond lines; Fracture process zone; Fracture toughness; Mode I","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-08-09","","","Structural Integrity & Composites","","",""
"uuid:0ca117b3-6ee4-4267-a581-984fc108d37a","http://resolver.tudelft.nl/uuid:0ca117b3-6ee4-4267-a581-984fc108d37a","DeepSHM: A deep learning approach for structural health monitoring based on guided Lamb wave technique","Ewald, Vincentius (TU Delft Structural Integrity & Composites); Groves, R.M. (TU Delft Structural Integrity & Composites); Benedictus, R. (TU Delft Structural Integrity & Composites)","Lynch, Jerome P. (editor); Sohn, Hoon (editor); Wang, Kon-Well (editor); Huang, Haiying (editor)","2019","In our previous work, we demonstrated how to use inductive bias to infuse a convolutional neural network (CNN) with domain knowledge from fatigue analysis for aircraft visual NDE. We extend this concept to SHM and therefore in this paper, we present a novel framework called DeepSHM which involves data augmentation of captured sensor signals and formalizes a generic method for end-to-end deep learning for SHM. The study case is limited to ultrasonic guided waves SHM. The sensor signal response from a Finite-Element-Model (FEM) is pre-processed through wavelet transform to obtain the wavelet coefficient matrix (WCM), which is then fed into the CNN to be trained to obtain the neural weights. In this paper, we present the results of our investigation on CNN complexities that is needed to model the sensor signals based on simulation and experimental testing within the framework of DeepSHM concept.","convolutional neural network (CNN); damage classification; deep learning; Finite-Element-Modelling (FEM); guided Lamb wave; signal processing; Structural Health Monitoring (SHM)","en","conference paper","SPIE","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:f038eaaa-71c2-4816-9c78-fc2c03ba2110","http://resolver.tudelft.nl/uuid:f038eaaa-71c2-4816-9c78-fc2c03ba2110","Transdisciplinary systems engineering: Implications, challenges and research agenda","Wognum, Nel (TU Delft Air Transport & Operations); Bil, Cees (Royal Melbourne Institute of Technology University); Elgh, Fredrik (Jönköping University); Peruzzini, Margherita (Università Degli Studi di Modena e Reggio Emilia); Stjepandić, Josip (PROSTEP AG); Verhagen, W.J.C. (TU Delft Air Transport & Operations)","","2019","Transdisciplinary processes have been the subject of research since several decades already. Transdisciplinary processes are aimed at solving ill-defined and socially relevant problems. Many researchers have studied transdisciplinary processes and have tried to understand the essentials of transdisciplinarity. Many engineering problems can be characterised as ill-defined and socially relevant, too. Although transdisciplinary engineering cannot widely be found in the literature yet, a transdisciplinary approach is deemed relevant for many engineering problems. With this paper we aim to present an overview of the literature on research into transdisciplinary processes and investigate the relevance of a transdisciplinary approach in engineering domains. After a brief description of past research on transdisciplinarity, implications for engineering research, engineering practice, and engineering education are identified. In all three areas, the current situation is described, while challenges are identified that still exist. The paper ends with a research agenda for transdisciplinary engineering.","Engineering education; Industry 4.0; Project-based learning; Social relevance; Transdisciplinary collaboration; Transdisciplinary engineering; Transdisciplinary processes; Transdisciplinary research; Transdisciplinary systems","en","journal article","","","","","","","","2020-04-01","","","Air Transport & Operations","","",""
"uuid:14bdcd50-350f-4edc-8436-88d2f4b28f2a","http://resolver.tudelft.nl/uuid:14bdcd50-350f-4edc-8436-88d2f4b28f2a","Multi-objective optimization of Resin Infusion","Struzziero, G. (TU Delft Aerospace Manufacturing Technologies); Skordos, A. A. (Cranfield University)","","2019","The present paper addresses the multi-objective optimization of the filling stage of the Resin Infusion manufacturing process. The optimization focuses on the selection of an optimal temperature profile which addresses the tradeoff between filling time and the risk of impeding the flow of resin due to excessive curing. The methodology developed combines a numerical solution of the coupled Darcy’s flow and heat conduction problem with a Genetic Algorithm (GA). The methodology converges successfully to a final Pareto set for the case of a C-stiffener which is 130 mm high, 60 mm wide and lies on a skin 280 mm wide. The results highlight the efficiency opportunities available compared to standard industrial manufacturing practice. Reductions in filling time up to 66% and up to 15% in final degree of cure are achieved compared to standard solutions.","composites manufacturing; finite elements; flow through porous media; multi-objective optimization; process simulation; Resin Infusion; thermosetting resin; viscosity; OA-Fund TU Delft","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:f985736b-1c1a-4607-b90a-06685785c787","http://resolver.tudelft.nl/uuid:f985736b-1c1a-4607-b90a-06685785c787","Effect of fabric architecture, compaction and permeability on through thickness thermoplastic melt impregnation","Studer, Julia (University of Applied Sciences and Arts Northwestern Switzerland; Hamburg University of Technology); Dransfeld, C.A. (TU Delft Aerospace Manufacturing Technologies; University of Applied Sciences and Arts Northwestern Switzerland); Jauregui Cano, Jon (University of Applied Sciences and Arts Northwestern Switzerland); Keller, Andre (University of Applied Sciences and Arts Northwestern Switzerland); Wink, Marianne (University of Applied Sciences and Arts Northwestern Switzerland); Masania, K. (ETH Zürich); Fiedler, Bodo (Hamburg University of Technology)","","2019","To reduce the cycle time of structural, automotive thermoplastic composites, we investigated the potential of direct thermoplastic melt impregnation of glass fabrics using an injection moulding process. At the high pressures that occur during the process, the effect of the fabric architecture on the impregnation, compaction, volume fraction and permeability of two unidirectional fabrics was studied. Using impregnation experiments with a low viscosity PA6 melt, we identified a favourable processing window resulting in an impregnation time of 5 min. The impregnation experiments with thermoplastic melts demonstrate that textile architectures promoting dual scale flow during impregnation are favourable for complete filling. Based on our findings, thermoplastic compression resin transfer moulding is an efficient processing route for automated production of composite parts with a high fibre volume fraction, if the fabric architecture is adapted for higher processing pressures and by fully utilising dual scale flow.","Compression resin transfer moulding; E. 
Manufacturing/Processing: Injection moulding; Fibre tow infiltration; Liquid composite moulding","en","journal article","","","","","","","","2021-04-19","","","Aerospace Manufacturing Technologies","","",""
"uuid:04bdaa74-7bf0-4166-ab02-420efdd90193","http://resolver.tudelft.nl/uuid:04bdaa74-7bf0-4166-ab02-420efdd90193","Comparison of three 3D scanning techniques for paintings, as applied to Vermeer’s ‘Girl with a Pearl Earring’","Elkhuizen, W.S. (TU Delft Mechatronic Design); Dore-Callewaert, T.W.J. (TU Delft (OLD) MSE-4); Leonhardt, Emilien (Hirox Europe/Jyfel Corporation); Vandivere, Abbie (Universiteit van Amsterdam); Song, Y. (TU Delft Mechatronic Design); Pont, S.C. (TU Delft Human Information Communication Design); Geraedts, Jo M.P. (TU Delft Mechatronic Design); Dik, J. (TU Delft (OLD) MSE-4)","","2019","A seventeenth-century canvas painting is usually comprised of varnish and (translucent) paint layers on a substrate. A viewer’s perception of a work of art can be affected by changes in and damages to these layers. Crack formation in the multi-layered stratigraphy of the painting is visible in the surface topology. Furthermore, the impact of mechanical abrasion, (photo)chemical processes and treatments can affect the topography of the surface and thereby its appearance. New technological advancements in non-invasive imaging allow for the documentation and visualisation of a painting’s 3D shape across larger segments or even the complete surface. In this manuscript we compare three 3D scanning techniques, which have been used to capture the surface topology of Girl with a Pearl Earring by Johannes Vermeer (c. 1665): a painting in the collection of the Mauritshuis, the Hague. These three techniques are: multi-scale optical coherence tomography, 3D scanning based on fringe-encoded stereo imaging (at two resolutions), and 3D digital microscopy. Additionally, scans were made of a reference target and compared to 3D data obtained with white-light confocal profilometry. The 3D data sets were aligned using a scale-invariant template matching algorithm, and compared on their ability to visualise topographical details of interest. 
The merits and limitations of the individual imaging techniques are also discussed in depth. We find that the 3D digital microscopy and the multi-scale optical coherence tomography offer the highest measurement accuracy and precision. However, the small field-of-view of these techniques makes them relatively slow and thereby less viable solutions for capturing larger (areas of) paintings. For Girl with a Pearl Earring we find that the 3D data provides an unparalleled insight into the surface features of this painting, specifically related to ‘moating’ around impasto, the effects of paint consolidation in earlier restoration campaigns and aging, through visualisation of the crack pattern. Furthermore, the data sets provide a starting point for future documentation and monitoring of the surface topology changes over time. These scans were carried out as part of the research project ‘The Girl in the Spotlight’.","3D digital microscopy; 3D scanning; Cultural heritage; Image processing; Image registration; Optical coherence tomography; Painting; Topography; OA-Fund TU Delft","en","journal article","","","","","","","","","","","Mechatronic Design","","",""
"uuid:a5052241-299c-4717-9b2b-a3b148016963","http://resolver.tudelft.nl/uuid:a5052241-299c-4717-9b2b-a3b148016963","Application of 3D scanning in design education","Lee, Wonsup (Handong Global University); Molenbroek, J.F.M. (TU Delft Applied Ergonomics and Design); Goto, L. (TU Delft Applied Ergonomics and Design); Jellema, A.H. (TU Delft Applied Ergonomics and Design); Song, Y. (TU Delft Mechatronic Design); Goossens, R.H.M. (TU Delft Applied Ergonomics and Design; TU Delft Industrial Design)","Scataglini, Sofia (editor); Paul, Gunther (editor)","2019","Three-dimensional scanning technologies have brought great opportunities in ergonomic and product design education as well as research. Not only anthropometric sizes but also the shapes and postures of humans, the forms of products, and human-product interactions obtained through 3D scanning have been usefully applied in product design. This chapter introduces a number of educational and research cases, which have been performed at the Faculty of Industrial Design Engineering at Delft University of Technology. First, as ergonomics plays a big role in the product design process, but in a different and advanced way than before, we have broadly applied the emerging 3D scanning technology in our design education and research. Because the topic of “ergonomic design based on 3D scanning” has been taught in our education, the number of students who are using 3D human scans for their course work and/or graduate project has increased considerably. Some of our successful cases will be introduced in this chapter. Second, from the 3D scanning practices in our education, we concluded there is a need for a 3D scanner, especially for the human hand, that is both quick and accurate but is also capable of scanning parts that are normally hard to cover. Multiple final master projects have contributed to the development of a working prototype of an accurate and low-cost 3D hand scanner. 
Finally, based on our experience, techniques, methods, software, and relevant information that can support design education based on 3D human scans will be discussed.","3D hand scanner; 3D image processing; 3D scanning; Ankle protector; Arm orthosis; Customized bra; Design education; Digital human modeling; Dined; Ergonomic design; Face mask; Helmet; Highly bicycle; Insole design; Sizing analysis software; Virtual fit analysis","en","book chapter","Academic Press","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2020-02-23","","Industrial Design","Applied Ergonomics and Design","","",""
"uuid:30e991b3-ec01-448e-ba55-0ea36359ae17","http://resolver.tudelft.nl/uuid:30e991b3-ec01-448e-ba55-0ea36359ae17","Exploring determinants influencing a service-oriented enterprise strategy: An executive management view","Plugge, A.G. (TU Delft Information and Communication Technology); Janssen, M.F.W.H.A. (TU Delft Information and Communication Technology)","Kotlarsky, Julia (editor); Oshri, Ilan (editor); Willcocks, Leslie (editor)","2019","Due to the convergence of rapid business developments and digitization challenges, firms need to become more agile. A service-oriented enterprise (SOE) strategy is an approach that decomposes an enterprise into business services that are modular, accessible, and interoperable, in which parts can be provided in-house or outsourced to the market. The SOE concept has mainly been approached from a technological view and little is known about what types of strategic SOE determinants are relevant. A firm’s strategy to implement an SOE requires top management support. Therefore, insights at executive level are a prerequisite to identify strategic business directions. We conducted a literature review and a qualitative case study amongst eleven firms at executive level in various industries. Business services, business processes, and enabling technology were found in the literature as key determinants influencing a firm’s SOE strategy. Subsequently, the interviews at executive level identified that organizational readiness, knowledge and skills, and governance also affect the SOE strategy of firms. We suggest that a holistic view is required to study the complexity of an SOE. 
By using an executive view we contribute to IS and business literature as strategic SOE determinants become more explicit.","Business processes; Business services; Enabling technology; Service-oriented enterprise; Strategic decision-making","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-09-06","","","Information and Communication Technology","","",""
"uuid:3b886ba2-1c3e-4ff2-b91c-7ff32e2bcf3c","http://resolver.tudelft.nl/uuid:3b886ba2-1c3e-4ff2-b91c-7ff32e2bcf3c","The representation of speech and its processing in the human brain and deep neural networks","Scharenborg, O.E. (TU Delft Multimedia Computing)","Salah, Albert Ali (editor); Karpov, Alexey (editor); Potapova, Rodmonga (editor)","2019","For most languages in the world and for speech that deviates from the standard pronunciation, not enough (annotated) speech data is available to train an automatic speech recognition (ASR) system. Moreover, human intervention is needed to adapt an ASR system to a new language or type of speech. Human listeners, on the other hand, are able to quickly adapt to nonstandard speech and can learn the sound categories of a new language without having been explicitly taught to do so. In this paper, I will present comparisons between human speech processing and deep neural network (DNN)-based ASR and will argue that the cross-fertilisation of the two research fields can provide valuable information for the development of ASR systems that can flexibly adapt to any type of speech in any language. Specifically, I present results of several experiments carried out on both human listeners and DNN-based ASR systems on the representation of speech and lexically-guided perceptual learning, i.e., the ability to adapt a sound category on the basis of new incoming information resulting in improved processing of subsequent speech. The results showed that DNNs appear to learn structures that humans use to process speech without being explicitly trained to do so, and that, similar to humans, DNN systems learn speaker-adapted phone category boundaries from a few labelled examples. 
These results are the first steps towards building human-speech processing inspired ASR systems that, similar to human listeners, can adjust flexibly and quickly to all kinds of new speech.","Adaptation; Deep neural networks; Human speech processing; Non-standard speech; Perceptual learning; Speech representations","en","conference paper","Springer","","","","","","","","","","Multimedia Computing","","",""
"uuid:703cf600-6af5-41a8-8c36-ee76a4805da1","http://resolver.tudelft.nl/uuid:703cf600-6af5-41a8-8c36-ee76a4805da1","Network localization is unalterable by infections in bursts","Liu, Q. (TU Delft Network Architectures and Services); Van Mieghem, P.F.A. (TU Delft Network Architectures and Services)","","2018","To shed light on the disease localization phenomenon, we study a bursty susceptible-infected-susceptible (SIS) model and analyze the model under the mean-field approximation. In the bursty SIS model, the infected nodes infect all their neighbors periodically, and the near-threshold steady-state prevalence is non-constant and maximized by a factor equal to the largest eigenvalue λ1 of the adjacency matrix of the network. We show that the maximum near-threshold prevalence of the bursty SIS process on a localized network tends to zero even if λ1 diverges in the thermodynamic limit, which indicates that the burst of infection cannot turn a localized spreading into a delocalized spreading. Our result is evaluated both on synthetic and real networks.","Complex networks; localization; epidemic process; susceptible-infected-susceptible model","en","journal article","","","","","","Accepted author manuscript","","","","","Network Architectures and Services","","",""
"uuid:d2c85ed0-3f87-47d7-84b6-8204e8bc2f16","http://resolver.tudelft.nl/uuid:d2c85ed0-3f87-47d7-84b6-8204e8bc2f16","A new mixed mode I/II failure criterion for laminated composites considering fracture process zone","Daneshjoo, Z. (Iran University of Science and Technology); Shokrieh, M. M. (Iran University of Science and Technology); Fakoor, M. (University of Tehran); Alderliesten, R.C. (TU Delft Structural Integrity & Composites)","","2018","In this paper, by considering the absorbed energy in the fracture process zone and extension of the minimum strain energy density theory for orthotropic materials, a new mixed mode I/II failure criterion was proposed. The applicability of the new criterion, to predict the crack growth in both laminated composites and wood species, was investigated. By defining a suitable damage factor and using the mixed mode I/II micromechanical bridging model, the absorbed energy in the fracture process zone was considered. It caused the new criterion to be more compatible with the nature of the failure phenomena in orthotropic materials, unlike available ones that were conservative. A good agreement was obtained between the fracture limit curves extracted by the present criterion and the available experimental data. The theoretical results were also compared with those of the minimum strain energy density criterion to show the superiority of the newly proposed criterion.","Delamination; Failure criterion; Fracture process zone; Laminated composite; Mixed mode I/II loading","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-04-01","","","Structural Integrity & Composites","","",""
"uuid:4f46e987-87a6-4f66-afa7-de9eacb8dc29","http://resolver.tudelft.nl/uuid:4f46e987-87a6-4f66-afa7-de9eacb8dc29","Low power IC design characterization techniques under process variations","Zandrahimi, M. (TU Delft Computer Engineering)","Al-Ars, Z. (promotor); Bertels, K.L.M. (promotor); Delft University of Technology (degree granting institution)","2018","To overcome the increasing sensitivity to variability in nanoscale integrated circuits, operation parameters (e.g., supply voltage) are adapted in a customized way exclusively to each chip. Adaptive voltage scaling (AVS) is a standard industrial technique which has been adopted widely to compensate for process, voltage, and temperature variations and to optimize the power of integrated circuits. For cost and complexity reasons, AVS techniques are usually implemented by means of on-chip performance monitors (so-called process monitoring boxes, or PMBs) allowing fast performance evaluation during production or run time. Such on-chip monitoring approaches estimate operation parameters either based on responses from performance monitors with no interaction with the circuit or by monitoring the actual critical paths of the circuit. In this thesis, we focus on AVS techniques, which estimate operation parameters using responses from on-chip performance monitors with no interaction with the circuit during production. We discuss the challenges that these monitoring methodologies face with decreasing node sizes, in terms of accuracy and effectiveness. We show that the accuracy of these approaches is design dependent, and requires up to 15% added design margin. In addition, we show using silicon measurements of a nanometric FD-SOI device that the required design margin is above 10% of the clock cycle, which leads to significant waste of power. In this thesis, we introduce the new method of using delay test patterns, including transition fault (TF), small delay defect (SDD), and path delay (PDLY) test patterns, for application of AVS during IC production. 
The proposed method is able to eliminate the need for PMBs, while improving the accuracy of performance estimation. The basic requirement of using delay-based AVS is that there should be a reasonable correlation between the frequency the chip can attain while passing all delay test patterns and the actual frequency of the chip. Based on simulation results of ISCAS’99 benchmarks with a 28 nm FD-SOI library, using delay test patterns results in an error of 5.33% for TF testing, an error of 3.96% for SDD testing, and an error as low as 1.85% using PDLY testing. Accordingly, PDLY patterns have the capacity to achieve the lowest error in performance estimation, followed by SDD patterns and finally TF patterns. We performed the same analysis using a 65 nm technology node, which showed the same results. We also performed two different silicon measurements on a 28 nm FD-SOI CPU to investigate the effectiveness of the TF-based approach. The results of the first case study on real silicon, comparing the performance estimation using functional test patterns and the TF-based approach, show a very close correlation between the two, which confirms the effectiveness of the TF approach. The second case study compares the accuracy of voltage estimation using PMBs and the TF-based approach. The results show that the PMB approach can only account for 85% of the uncertainty in voltage measurements, which results in considerable power waste. In comparison, the TF-based approach can account for 99% of that uncertainty, thereby providing the ability to reduce that wasted power.","Adaptive voltage scaling; process variations; performance estimation; process monitoring boxes; delay testing; transition fault testing; path delay testing","en","doctoral thesis","","","","","","","","","","","Computer Engineering","","",""
"uuid:a2d89616-1fe7-4fd7-a0b3-6941f68db857","http://resolver.tudelft.nl/uuid:a2d89616-1fe7-4fd7-a0b3-6941f68db857","An effective approach for rotor electrical asymmetry detection in wind turbine DFIGs","Ibrahim, Raed Khalaf (Loughborough University); Watson, S.J. (TU Delft Wind Energy); Djurović, Siniša (The University of Manchester); Crabtree, C.J. (TU Delft Wind Energy; Durham University)","","2018","Determining the magnitude of particular fault signature components (FSCs) generated by wind turbine (WT) faults from current signals has been used as an effective way to detect early abnormalities. However, the WT current signals are time varying due to the constantly varying generator speed. The WT frequently operates with the generator close to the synchronous speed, resulting in FSCs manifesting themselves in the vicinity of the supply frequency and its harmonics, making their detection more challenging. To address this challenge, the detection of rotor electrical asymmetry in WT doubly fed induction generators, indicative of common winding, brush gear, or high resistance connection faults, has been investigated using a test rig under three different driving conditions, and then an effective extended Kalman filter (EKF) based method is proposed to iteratively estimate the FSCs and track their magnitudes. The proposed approach has been compared with a continuous wavelet transform (CWT) and an iterative localized discrete Fourier-transform (IDFT). The experimental results demonstrate that the CWT and IDFT algorithms fail to track the FSCs at low load operation near-synchronous speed. 
In contrast, the EKF was more successful in tracking the FSCs magnitude in all operating conditions, unambiguously determining the severity of the faults over time and providing significant gains in both computational efficiency and accuracy of fault diagnosis.","Condition monitoring (CM); continuous wavelet transform (CWT); doubly fed induction generators (DFIGs); extended Kalman filter (EKF); fault diagnosis; Fourier transform; induction generators; signal processing; time-frequency analysis; wavelet transforms; wind power generation; wind turbines (WTs)","en","journal article","","","","","","","","","","","Wind Energy","","",""
"uuid:96d12ec6-03a5-41a6-979f-6692bd6fd43f","http://resolver.tudelft.nl/uuid:96d12ec6-03a5-41a6-979f-6692bd6fd43f","Improved process representation in the simulation of the hydrology of a meso-scale semi-arid catchment","Okello, Aline M.L.Saraiva (IHE Delft Institute for Water Education; University of KwaZulu-Natal); Masih, Ilyas (IHE Delft Institute for Water Education); Uhlenbrook, S. (TU Delft Water Resources; IHE Delft Institute for Water Education; UNESCO); Jewitt, Graham P.W. (University of KwaZulu-Natal); van der Zaag, P. (TU Delft Water Resources; IHE Delft Institute for Water Education)","","2018","The variability of rainfall and climate, combined with land use and land cover changes, and variation in geology and soils makes it a difficult task to accurately describe the key hydrological processes in a catchment. With the aim to better understand the key hydrological processes and runoff generation mechanisms in the semi-arid meso-scale Kaap catchment in South Africa, a hydrological model was developed using the open source STREAM model. Dominant runoff processes were mapped using a simplified Height Above the Nearest Drainage approach combined with geology. The Prediction in Ungauged Basins (PUB) framework of runoff signatures was used to analyse the model results. Results show that in the headwater sub-catchments of Noordkaap and Suidkaap, plateaus dominate, associated with slow flow processes. Therefore, these catchments have high baseflow components and are likely the main recharge zone for regional groundwater in the Kaap. In the Queens sub-catchment, hillslopes associated with intermediate and fast flow processes dominate. However, this catchment still has a strong baseflow component, but it seems to be more impacted by evaporation depletion, due to different soils and geology, especially in drier years. 
At the Kaap outlet, the model indicates that hillslopes are important, with intermediate and fast flow processes dominating and most runoff being generated through direct runoff and shallow groundwater components, particularly in wetter months and years. There is a high impact of water abstractions and evaporation during the dry season, affecting low flows in the catchment. Results also indicate that the root zone storage and the parameters of effective rainfall separation (between unsaturated and saturated zone), quickflow coefficient and capillary rise, were very sensitive in the model. The inclusion of capillary rise (feedback from the saturated to unsaturated zone) greatly improved the simulation results.","Hydrological modelling; Kaap River; Runoff processes; Semi-arid catchment; Southern Africa; STREAM model","en","journal article","","","","","","","","","","","Water Resources","","",""
"uuid:5f58db2e-4538-426a-8c1a-2e505f7e34a7","http://resolver.tudelft.nl/uuid:5f58db2e-4538-426a-8c1a-2e505f7e34a7","Scotty: Efficient window aggregation for out-of-order stream processing","Traub, Jonas (Technical University of Berlin); Grulich, Philipp Marian (DFKI GmbH); Rodriguez Cuellar, Alejandro (Technical University of Berlin); Bress, Sebastian (Technical University of Berlin; DFKI GmbH); Katsifodimos, A (TU Delft Web Information Systems); Rabl, Tilmann (Technical University of Berlin; DFKI GmbH); Markl, Volker (Technical University of Berlin; DFKI GmbH)","","2018","Computing aggregates over windows is at the core of virtually every stream processing job. Typical stream processing applications involve overlapping windows and, therefore, cause redundant computations. Several techniques prevent this redundancy by sharing partial aggregates among windows. However, these techniques do not support out-of-order processing and session windows. Out-of-order processing is a key requirement to deal with delayed tuples in case of source failures such as temporary sensor outages. Session windows are widely used to separate different periods of user activity from each other. In this paper, we present Scotty, a high throughput operator for window discretization and aggregation. Scotty splits streams into non-overlapping slices and computes partial aggregates per slice. These partial aggregates are shared among all concurrent queries with arbitrary combinations of tumbling, sliding, and session windows. Scotty introduces the first slicing technique which (1) enables stream slicing for session windows in addition to tumbling and sliding windows and (2) processes out-of-order tuples efficiently. Our technique is generally applicable to a broad group of dataflow systems which use a unified batch and stream processing model. 
Our experiments show that we achieve a throughput an order of magnitude higher than alternative state-of-the-art solutions.","Aggregate sharing; Aggregation; out of order; Scotty; Session; Session Windows; Slicing; Stream; Stream Processing; Stream Slicing; Window","en","conference paper","Institute of Electrical and Electronics Engineers (IEEE)","","","","","","","","","","Web Information Systems","","",""
"uuid:427b434d-6e9b-4fd9-81d8-49a265cd90ac","http://resolver.tudelft.nl/uuid:427b434d-6e9b-4fd9-81d8-49a265cd90ac","Benchmarking Distributed Stream Data Processing Systems","Karimov, Jeyhun (German Research Centre for Artificial Intelligence (DFKI)); Rabl, Tilmann (Technical University of Berlin; German Research Centre for Artificial Intelligence (DFKI)); Katsifodimos, A (TU Delft Web Information Systems); Samarev, Roman (German Research Centre for Artificial Intelligence (DFKI)); Heiskanen, Henri (Rovio Entertainment); Markl, Volker (German Research Centre for Artificial Intelligence (DFKI); Technical University of Berlin)","","2018","The need for scalable and efficient stream analysis has led to the development of many open-source streaming data processing systems (SDPSs) with highly diverging capabilities and performance characteristics. While first initiatives try to compare the systems for simple workloads, there is a clear lack of detailed analyses of the systems' performance characteristics. In this paper, we propose a framework for benchmarking distributed stream processing engines. We use our suite to evaluate the performance of three widely used SDPSs in detail, namely Apache Storm, Apache Spark, and Apache Flink. Our evaluation focuses in particular on measuring the throughput and latency of windowed operations, which are the basic type of operations in stream analytics. For this benchmark, we design workloads based on real-life, industrial use-cases inspired by the online gaming industry. The contribution of our work is threefold. First, we give a definition of latency and throughput for stateful operators. Second, we carefully separate the system under test from the driver, in order to correctly represent the open-world model of typical stream processing deployments and can, therefore, measure system performance under realistic conditions. Third, we build the first benchmarking framework to define and test the sustainable performance of streaming systems. 
Our detailed evaluation highlights the individual characteristics and use-cases of each system.","Apache Flink; Apache Spark; Apache Storm; Stream benchmark; Stream data processing","en","conference paper","IEEE","","","","","","","","","","Web Information Systems","","",""
"uuid:f613079c-90a1-47dc-afcb-f6833646ca5a","http://resolver.tudelft.nl/uuid:f613079c-90a1-47dc-afcb-f6833646ca5a","LQG and Gaussian process techniques: For fixed-structure wind turbine control","Bijl, H.J. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); van Wingerden, J.W. (promotor); Delft University of Technology (degree granting institution)","2018","Wind turbines are growing bigger to become more cost-efficient. This does increase the severity of the vibrations that are present in the turbine blades, both due to predictable effects like wind shear and tower shadow, and due to less predictable effects like turbulence and flutter. If wind turbines are to become bigger and more cost-efficient, these vibrations need to be reduced. This can be done by installing trailing-edge flaps on the blades. Because of the variety of circumstances in which the turbine should operate, this results in large uncertainties. As such, we need methods that can take stochastic effects into account. Preferably we develop an algorithm that can learn from online data how the flaps affect the wind turbine and how to optimally control them. A simple prior analysis can be done using a linearized version of the system. In this case it is important to know not only the expected cost (damage) that will be incurred by the wind turbine in various situations, but also the spread of this cost. This can for instance be done by looking at the variance of the cost function. Various expressions are available to analytically calculate this variance. Alternatively, we can prescribe a degree of stability for the system. Due to the limitations of linear approximations of systems, it is more effective to apply nonlinear regression methods. A promising one is Gaussian Process (GP) regression. Given a training set (X, y) it can predict function values f(x*) for test points x*. 
It has its basis in Bayesian probability theory, which allows it to not only make this prediction, but also give information (the variance) about its accuracy. The usual way in which GP regression is applied has a few important limitations. Most importantly, it is computationally intensive, especially when applied to constantly growing data sets. In addition, it has difficulties dealing with noise present in the training input points x. There are methods to solve either of these issues, but these tricks generally do not work well together, or their combination requires many computational resources. However, by making the right approximations, like Taylor expansions and at times even linearizations, Gaussian process regression can be applied efficiently, in an online way, to data sets with noisy input points. This enables GP regression to be used for system identification problems like online nonlinear black-box modeling. Another limitation is that it can be difficult to find the optimum of a Gaussian process. The reason is that the optimum of a Gaussian process is not a fixed point but a random variable. The distribution of this optimum cannot be calculated analytically, but we can use particle methods to approximate it. We can subsequently use this principle to efficiently explore an unknown nonlinear function, trying to locate its optimum. To do so, we sample a point x from the optimum distribution, measure the function value f(x) at this point, update the Gaussian process approximation of the function, update the optimum distribution and repeat this process until the distribution has converged. Finding the optimum of a function in this way has been shown to have competitive performance at keeping the cumulative regret low, compared to similar algorithms. In addition, it allows wind turbines to tune the gains of a fixed-structure controller so as to optimize a nonlinear cost function like the damage equivalent load. 
All these improvements are a step forward in the application of Gaussian process regression to wind turbine applications. But as is always the case with research, there are still many things left to improve further.","Gaussian processes; regression; machine learning; optimization; system identification; automatic control; wind energy; smart rotor","en","doctoral thesis","","978-94-6299-501-7","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:9817f0fc-52f0-4a3c-82ce-04a804227de6","http://resolver.tudelft.nl/uuid:9817f0fc-52f0-4a3c-82ce-04a804227de6","An overview of some recent developments in glass science and their relevance to quality control in the glass industry","Veer, F.A. (TU Delft Structural Design & Mechanics); Bristogianni, T. (TU Delft Applied Mechanics); Justino de Lima, C.L. (TU Delft Applied Mechanics)","","2018","The classical image of glass is that of a rigid, transparent, brittle material characterized by a non-crystalline microstructure. This 19th- and 20th-century image, however, is mostly based on the contrast between soda lime glass and metals. It does not really make sense in the 21st century, where more modern testing methods have increased our understanding of the physicochemistry of glass. Based on recent results and the development of computational molecular dynamics software modelling, a new approach to the physicochemistry of glass is outlined. The consequences this view has for glass properties and processing are explained.","Glass structure; hot working glass; glass processing; effect of glass composition","en","journal article","","","","","","","","","","","Structural Design & Mechanics","","",""
"uuid:8c32e48d-c06e-43e8-89d5-681fb2015873","http://resolver.tudelft.nl/uuid:8c32e48d-c06e-43e8-89d5-681fb2015873","Processes in Cadastre: Process Model for Serbian 3D Cadastre","Sladic, Dubravka; Radulovic, Aleksandra; Govedarica, Miro","","2018","Identifying the processes in the cadastre enables understanding the principles on which the cadastre works and the needs for its improvement. These processes define how the cadastre manages information and what the prerequisites are for the data to be stored in the appropriate data structure. The first step in determining the set of processes is defining business tasks in a cadastre that arise from the needs of different users - internal in the cadastre and external, like right holders, the Government and many other organizations. These needs define business tasks and the data sets necessary to successfully perform the tasks. The next step is to define the process itself, and then implement the process in the appropriate architecture. Navratil and Andrew (2004) organize processes in the cadastre in two groups: processes that change the data in the system and processes by which data are downloaded or viewed. The analysis of business processes in the Serbian cadastre shows that such a basic process division is applicable as an initial step in the process hierarchy. A top-down strategy was selected for describing the processes. This strategy refines the decomposition of the process from general to specific, thus creating an insight into the elements of the subsystem. At the highest level, a system overview is defined without the introduction of process details. Each subsequent level introduces more details, or processes, until the specification is reduced to basic processes or activities. The standardization of specific processes for all cadastres in the world is impossible due to the large differences in the way in which certain procedures are implemented from one country to another. 
However, the first three levels of the described hierarchical process group division can be applied to cadastral transactions in general. The processes in the cadastre can be implemented using the technology of Web services in a Service Oriented Architecture (SOA). The technology of Web services supports automated integration of the systems of independent organizations and is in wide use for that purpose. With this in mind, in this paper we first present a model developed for the processes in the Serbian cadastre and then extend it to support data maintenance and transactions in a 3D cadastre, including registration and update of 3D spatial units. Considering the ongoing projects in the world on the integration of geospatial information with indoor spatial information and building information modeling, we explore the possibilities of implementing 3D information in the SOA environment. If the information about 3D legal spaces is encoded using buildingSMART openBIM standards, we explore the possibility of using these standards, including the BIM Collaboration Format (BCF), an XML schema and a RESTful web service for the exchange of data, as shown in a case study of a typical building.","OpenBIM; Processes in cadastre; 3D cadastre; Web services; SOA","en","conference paper","","","","","","","","","","","","","",""
"uuid:5caa0f91-4db2-4201-9f0f-594f64c46e18","http://resolver.tudelft.nl/uuid:5caa0f91-4db2-4201-9f0f-594f64c46e18","A study on amplitude transmission in ultrasonic welding of thermoplastic composites","Palardy, Genevieve (Louisiana State University); Shi, H. (TU Delft Structural Integrity & Composites); Levy, Arthur (Laboratoire de Thermocinétique de Nantes); Le Corre, Steven (Laboratoire de Thermocinétique de Nantes); Villegas, I.F. (TU Delft Structural Integrity & Composites)","","2018","Ultrasonic welding of thermoplastic composite materials is a promising joining technique that is now moving towards up-scaling, i.e. the assembling of large industrial parts. Despite its growing technological maturation, the assumed physical mechanisms underlying ultrasonic heating (viscoelastic heating, friction) are still insufficiently understood and modelled. In particular, the hammering phenomenon, resulting from the periodic loss of contact between the sonotrode and adherends due to the high frequency vibration caused to the former, directly impacts the heating efficiency. We propose in this work an original experimental and modelling approach towards a better understanding of the hammering effect. This approach makes combined use of: (i) an experimental static welding setup provided with a high-frequency laser sensor to analyse the vibration amplitude transmitted to the adherends and (ii) an improvement of the multiphysical finite element model already presented in previous works. Results show it is possible to obtain a good estimation of the vibration transmitted to the upper adherend from laser measurements close to the sonotrode. The hammering effect is shown to decrease during the welding process, due to the heating of the interface which directly affects further heat generation. Quantitative introduction of this hammering effect in the existing numerical model results in improved predictions in terms of dissipated power in time.","A. Thermoplastic resin; B. 
Vibration; C. Process Modelling; E. Joints/Joining","en","journal article","","","","","","","","2020-08-10","","","Structural Integrity & Composites","","",""
"uuid:74cf2257-b17d-42ac-b70f-e6883a77c7d1","http://resolver.tudelft.nl/uuid:74cf2257-b17d-42ac-b70f-e6883a77c7d1","Ice edge failure process and modelling ice pressure","Riska, K.A. (TU Delft Offshore Engineering)","","2018","Ice action on ships and offshore structures is commonly determined by calculating the contact ice pressure. The aim of this paper is to describe the empirical background for determining the ice pressure. This review article describes six different test series where ice edge indentation and contact ice pressure have been investigated. These test series are ice pressure measurements onboard IB Sisu in the Baltic in 1977, pendulum tests carried out at Arctec in Ottawa, Canada, in 1979, laboratory and full scale ice crushing tests at WARC in 1988 and onboard IB Sampo 1989, medium scale indentation tests on Hobson's Choice Ice Island 1990, ice crushing tests at NRC, Ottawa 1992 and the JOIA tests in Hokkaido 1996-1999. These tests were selected as at each series a new phenomenon was observed. The aim of the paper is to introduce the main features for ice-structure contact empirically through the description of tests. The paper is concluded with a short description of the existing models for ice pressure, especially to gain an insight and highlight the main observations in each test series and how the models for ice pressure have developed based on the observations.","Ice crushing process; Ice failure; Ice pressure; Ice strength; Ice-structure contact","en","review","","","","","","","","","","","Offshore Engineering","","",""
"uuid:75e75d2f-68fe-4fa0-9f33-43bf2b7cfc89","http://resolver.tudelft.nl/uuid:75e75d2f-68fe-4fa0-9f33-43bf2b7cfc89","Advanced Techniques to Process Differential Phase Measurements for Polarimetric X-band Weather Radars","Reinoso Rondinel, R. (TU Delft Atmospheric Remote Sensing; TU Delft Geoscience and Remote Sensing)","Russchenberg, H.W.J. (promotor); Delft University of Technology (degree granting institution)","2018","Observations of weather phenomena have attracted many researchers because of their microphysical complexity, space-time variability, and more important, their impact on human life. In the efforts of studying weather, researchers have used a diverse number of instruments to obtain both in-situ (towers, tethered balloons, and weather station networks) and remote (radar, lidar, satellite) measurements. In this study, weather measurements are obtained using ground-based weather radars, which are able to scan over a large space domain, acquiring data from scanned hydrometeor targets, such as groups of rain and ice particles. Radar measurements require complex processes to extract reliable information that can be used by weather institutions, companies, and citizens. In this thesis, innovative methods are presented to process weather radar measurements, acquired at X-band frequencies and using polarimetric technology, with the aim of capturing the natural variability of storm events.","Weather radar signal processing; Rainfall; Optimization Method","en","doctoral thesis","","978-94-6366-063-1","","","","","","","","Geoscience and Remote Sensing","Atmospheric Remote Sensing","","",""
"uuid:928b9f13-50d5-4274-a4d7-575604594cad","http://resolver.tudelft.nl/uuid:928b9f13-50d5-4274-a4d7-575604594cad","Acid phosphatase behaviour at an electrified soft junction and its interfacial co-deposition with silica","Poltorak, L. (TU Delft OLD ChemE/Organic Materials and Interfaces); van der Meijden, N. (TU Delft Applied Sciences); Oonk, S. (Netherlands Forensic Institute - NFI); Sudhölter, Ernst J. R. (TU Delft OLD ChemE/Organic Materials and Interfaces); de Puit, M. (TU Delft OLD ChemE/Organic Materials and Interfaces; Netherlands Forensic Institute - NFI)","","2018","The behaviour of acid phosphatase at an electrified liquid–liquid interface was studied in this work. It was found that only the protonated form of the protein can undergo interfacial adsorption which is affected by the pH of the aqueous phase. With ion transfer voltammetry we could detect acid phosphatase in concentrations as low as 0.1 μM. We were able to co-deposit the protein and silica at the electrified liquid–liquid interface via controlled proton transfer to the organic phase where it catalyzed tetraethoxysilane hydrolysis, followed by polycondensation to silica.","Acid phosphatase; Interfacial deposition; ITIES; Proteins; Sol-gel process of silica","en","journal article","","","","","","","","2020-07-31","Applied Sciences","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:7f20c3ee-b6c4-4cd1-9515-29370b9f37e5","http://resolver.tudelft.nl/uuid:7f20c3ee-b6c4-4cd1-9515-29370b9f37e5","Stochastic multi-objective optimisation of the cure process of thick laminates","Tifkitsis, K. I. (Cranfield University); Mesogitis, T. S. (National Composites Centre); Struzziero, G. (TU Delft Aerospace Manufacturing Technologies); Skordos, A. A. (Cranfield University)","","2018","A stochastic multi-objective cure optimisation methodology is developed in this work and applied to the case of thick epoxy/carbon fibre laminates. The methodology takes into account the uncertainty in process parameters and boundary conditions and minimises the mean values and standard deviations of cure time and temperature overshoot. Kriging is utilised to construct a surrogate model of the cure substituting Finite Element (FE) simulation for computational efficiency reasons. The surrogate model is coupled with Monte Carlo and integrated into a stochastic multi-objective optimisation framework based on Genetic Algorithms. The results show a significant reduction of about 40% in temperature overshoot and cure time compared to standard cure profiles. This reduction is accompanied by a reduction in variability by about 20% for both objectives. This highlights the opportunity of replacing conventional cure schedules with optimised profiles achieving significant improvement in both process efficiency and robustness.","A. Carbon fibre; A. Thermosetting resin; C. Process simulation; E. Cure","en","journal article","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:e0e7eed5-3671-469b-bd43-52dcf0f05bd8","http://resolver.tudelft.nl/uuid:e0e7eed5-3671-469b-bd43-52dcf0f05bd8","A parallel N-dimensional Space-Filling Curve library and its application in massive point cloud management","Guan, X. (Wuhan University); van Oosterom, P.J.M. (TU Delft OLD Department of GIS Technology); Cheng, Bo (Wuhan University)","","2018","Because of their locality preservation properties, Space-Filling Curves (SFC) have been widely used in massive point dataset management. However, the completeness, universality, and scalability of current SFC implementations are still not well resolved. To address this problem, a generic n-dimensional (nD) SFC library is proposed and validated in massive multiscale nD points management. The library supports two well-known types of SFCs (Morton and Hilbert) with an object-oriented design, and provides common interfaces for encoding, decoding, and nD box query. Parallel implementation permits effective exploitation of underlying multicore resources. During massive point cloud management, all xyz points are attached an additional random level of detail (LOD) value l. A unique 4D SFC key is generated from each xyzl with this library, and then only the keys are stored as flat records in an Oracle Index Organized Table (IOT). The key-only schema benefits both data compression and multiscale clustering. Experiments show that the proposed nD SFC library provides complete functions and robust scalability for massive points management. When loading 23 billion Light Detection and Ranging (LiDAR) points into an Oracle database, the parallel mode takes about 10 h and the loading speed is estimated four times faster than sequential loading. Furthermore, 4D queries using the Hilbert keys take about 1∼5 s and scale well with the dataset size.","Level of detail; Parallel processing; Point clouds; Space-filling curve","en","journal article","","","","","","","","","","","OLD Department of GIS Technology","","",""
"uuid:989ce975-a30f-4e8d-8c19-a5938d957ced","http://resolver.tudelft.nl/uuid:989ce975-a30f-4e8d-8c19-a5938d957ced","Preparation of bio-bitumen by bio-oil based on free radical polymerization and production process optimization","Sun, Z. (TU Delft Pavement Engineering; Harbin Institute of Technology); Yi, Junyan (Harbin Institute of Technology); Feng, Decheng (Harbin Institute of Technology); Kasbergen, C. (TU Delft Pavement Engineering); Scarpas, Athanasios (TU Delft Pavement Engineering); Zhu, Yiming (Harbin Institute of Technology)","","2018","Bio-oil produced during the production of biodiesel is a burden to the environment. Recycling and utilization of bio-oil as a substitute for pavement bitumen can help to build an environmentally-friendly and clean infrastructure. In this study, the bio-bitumen was prepared by bio-oil based on free radical polymerization. Different kinds of bio-bitumen products were produced by reacting bio-oil with an initiator and an accelerator solution at different reaction conditions. The orthogonal experimental method was employed to determine the optimal bio-bitumen production process by evaluating the indices of viscosity, rutting factors and fatigue factors. The test results show that the optimal mass proportions of bio-oil:initiator:accelerator solution is 100:1:2. Materials with these mass proportions should react at 100 °C for 2 h to yield the best bio-bitumen product. This kind of bio-bitumen product can be considered as a promising substitute for traditional petroleum bitumen.","Bio-bitumen; Bio-oil; Free radical polymerization; Production process optimization; Waste cooking oil","en","journal article","","","","","","","","2020-04-12","","","Pavement Engineering","","",""
"uuid:38816a60-318f-42b4-95ee-0ab76b47881d","http://resolver.tudelft.nl/uuid:38816a60-318f-42b4-95ee-0ab76b47881d","Reservoir Lithology Determination by Hidden Markov Random Fields Based on a Gaussian Mixture Model","Feng, R. (TU Delft Applied Geology); Luthi, S.M. (TU Delft Applied Geology); Gisolf, A. (TU Delft ImPhys/Acoustical Wavefield Imaging); Angerer, Erika (OMV Exploration & Production)","","2018","In this paper, geological prior information is incorporated in the classification of reservoir lithologies after the adoption of Markov random fields (MRFs). The prediction of hidden lithologies is based on measured observations, such as seismic inversion results, which are associated with the latent categorical variables, based on the assumption of Gaussian distributions. Compared with other statistical methods, such as the Gaussian mixture model or k-Means, which do not take spatial relationships into account, the hidden MRFs approach can connect the same or similar lithologies horizontally while ensuring a geologically reasonable vertical ordering. It is, therefore, able to exclude randomly appearing lithologies caused by errors in the inversion. The prior information consists of a Gibbs distribution function and transition probability matrices. The Gibbs distribution connects the same or similar lithologies internally, which does not need a geological definition from the outside. The transition matrices provide preferential transitions between different lithologies, and an estimation of them implicitly depends on the depositional environments and juxtaposition rules between different lithologies. 
Analog cross sections from the subsurface or outcrop studies can contribute to the construction of these matrices by a simple counting procedure.","Bayes methods; Gaussian mixture model; Hidden Markov models; Hidden Markov random fields (HMRFs); lithology determination; Markov processes; Reservoirs; Rocks; seismic inversion; transition matrix","en","journal article","","","","","","","","","","","Applied Geology","","",""
"uuid:7f301bbd-c8a1-4614-8bb0-2c17f9e39dd6","http://resolver.tudelft.nl/uuid:7f301bbd-c8a1-4614-8bb0-2c17f9e39dd6","Acoustically effective facade","Krimm, J. (TU Delft Design of Constrution)","Knaack, U. (promotor); Techen, Holger (promotor); Klein, T. (promotor); Delft University of Technology (degree granting institution)","2018","Today’s city centres in European metropolitan areas are comprised of facades made of steel, glass and stone. These hard reflective facades are amplifying the perception of noise sources by human ears in their vicinity. Up to now in building designs this effect is neglected. Thus the number of people harmed by noise is increasing with the increasing noise levels on the streets caused by more and more hard reflective facades. To obtain control on urban acoustic spaces the focus of architects and engineers must be shifted to acoustics parameters. Several case studies in course of this research give evidence for the possibility of controlling the impact of noise sources on an urban space with modified facades. The experience and results of the case studies were merged to deliver a plot of a process chart for implementing the acoustical point of view in a building design process. Laboratory methods e.g. scale model measurements and impedance measurements were modified in order to be feasible in a building or facade design process. As with modified reflection properties of facade surfaces a sound reduction of up to 8 dB for specific frequency bands is feasible the building of quieter cities is in the responsibility of architects and engineers.","Urban acoustic; facades; building design; Noise control; Design Process","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-052-5","","","","A+BE | Architecture and the Built Environment No. 16 (2018)","","","","","Design of Constrution","","",""
"uuid:44a3f674-f8e0-4414-a92e-fe1dd7139f96","http://resolver.tudelft.nl/uuid:44a3f674-f8e0-4414-a92e-fe1dd7139f96","Denoising controlled-source electromagnetic data using least-squares inversion","Yang, Yang (Central South University China; Shandong University); Li, Diquan (Central South University China); Tong, Tiegang (Central South University China); Zhang, D. (TU Delft ImPhys/Acoustical Wavefield Imaging); Zhou, Yatong (Hebei University of Technology); Chen, Yangkang (Zhejiang University)","","2018","Strong noise is one of the toughest problems in the controlled-source electromagnetic (CSEM) method, which highly affects the quality of recorded data. The three main types of noise existing in CSEM data are periodic noise, Gaussian white noise, and nonperiodic noise, among which the nonperiodic noise is thought to be the most difficult to remove. We have developed a novel and effective method for removing such nonperiodic noise by formulating an inverse problem that is based on inverse discrete Fourier transform and several time windows in which only Gaussian white noise exists. These critical locations, which we call reconstruction locations, can be found by taking advantage of the continuous wavelet transform (CWT) and the temporal derivative of the scalogram generated by CWT. The coefficients of the nonperiodic noise are first estimated using the new least-squares method, and then they are subtracted from the coefficients of the raw data to produce denoised data. Together with the nonperiodic noise, we also remove Gaussian noise using the proposed method. We validate the methodology using real-world CSEM data.","Electromagnetics; Least-squares; Noise; Signal processing; Wavelet","en","journal article","","","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:0816cbe5-4e42-4fd3-a328-4775c5ccb633","http://resolver.tudelft.nl/uuid:0816cbe5-4e42-4fd3-a328-4775c5ccb633","Impact of sand nourishments on hydrodynamics and swimmer safety","Radermacher, M. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Reniers, A.J.H.M. (promotor); de Schipper, M.A. (copromotor); Delft University of Technology (degree granting institution)","2018","Artificial sand nourishments are a common measure to mitigate coastal erosion problems. Such nourishments can have an impact on currents and waves near the beach, especially when the nourishment has a large size. As nourished beaches often have a recreational function, these altered wave and current patterns may pose a threat to swimmers. This study has investigated the impact of nourishments on currents, waves and swimmer safety. De Sand Motor, an experimental large-scale nourishment at the Dutch coastline south of The Hague, served as a central case study. Using a combination of current measurements at sea and computer models, this study has revealed several interesting flow patterns around the Sand Motor, amongst others the presence of large eddies in the tidal flow. To determine the impact of such flow patterns on swimmer safety, the presence and spatial spreading of beach users at the Sand Motor was monitored with a set of cameras. Although the tidal eddies have a clear influence on currents and sand transport around the Sand Motor, their impact on swimmer safety remains limited. At the part of the Sand Motor where hazardous currents due to tidal eddies may occur, hardly any beach users are present due to the large distance from beach entrances, parking lots and restaurants. The most significant hazard is formed by tidal currents in the artificial lagoon, which has been incorporated in the initial design of the Sand Motor. 
Especially in the first years after construction of the nourishment, currents in the channel connecting the lagoon to the North Sea were quite strong, while that part of the Sand Motor can be crowded on nice summer days. The findings of this study enable engineers to incorporate swimmer safety considerations in the design of future nourishments. Furthermore, more fundamental insights into waves and currents around the Sand Motor contribute to the understanding of sediment transport, coastal erosion and eventually prevention of coastal flooding.","swimmer safety; sand nourishments; coastal processes; Sand Motor","en","doctoral thesis","","978-94-028-1065-3","","","","","","","","","Coastal Engineering","","",""
"uuid:ac299084-f546-480b-ac9b-16a0edf07b13","http://resolver.tudelft.nl/uuid:ac299084-f546-480b-ac9b-16a0edf07b13","Three-dimensional receiver deghosting of seismic streamer data using L1 inversion and redundant extended radon dictionary","Sun, Yimin (Aramco Overseas Company B.V.); Verschuur, D.J. (TU Delft ImPhys/Acoustical Wavefield Imaging)","","2018","In this paper, we propose a novel three-dimensional receiver deghosting algorithm that is capable of deghosting both horizontal and slanted streamer data in a theoretically consistent manner. Our algorithm honours wave propagation phenomena in a true three-dimensional sense and frames the three-dimensional receiver deghosting problem as a Lasso problem. The ultimate goal is to minimise the mismatch between the actual measurements and the simulated wavefield with an L1 constraint applied in
the extended Radon space to handle the underdetermined nature of this problem. We successfully demonstrate our algorithm on a modified three-dimensional EAGE/SEG
Overthrust model and a Red Sea marine dataset.","Data processing; Deghosting; Inversion","en","journal article","","","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:d36194de-1203-4342-bf20-8d03380c5b40","http://resolver.tudelft.nl/uuid:d36194de-1203-4342-bf20-8d03380c5b40","Managed aquifer recharge as a barrier for ozone-based advanced oxidation by-products: BrO3- and H2O2","Wang, F. (TU Delft Sanitary Engineering)","van der Hoek, J.P. (promotor); van Halem, D. (promotor); Delft University of Technology (degree granting institution)","2018","Managed Aquifer Recharge (MAR) is a technology that relies on soil passage - after pond infiltration - for water treatment. MAR is a proven technology for the removal of pathogenic micro-organisms, turbidity and a selection of specific organic micro-pollutions (OMPs). Nevertheless, removal of the wide variety of OMPs found in surface waters requires additional treatment. The application of O3-based advanced oxidation processes (AOPs) before MAR has been proposed as a smart solution, because previous studies have documented complementary and synergetic benefits for the removal of OMPs. However, the effect of the installation of O3-based AOP as a chemical process on the subsequent MAR as a biological process is not known yet. Especially the behaviour and fate of O3-based AOP by-products and residuals on MAR raise many questions. This thesis focused on the behaviour and fate of BrO3 - as an O3-based AOP by-product and
H2O2 as an AOP residual during MAR.","Managed aquifer recharge; Advanced oxidation processes; Bromate; Hydrogen peroxide; By-product; Iron; Denitrifying bacteria","en","doctoral thesis","","978-90-6562-422-2","","","","","","","","","Sanitary Engineering","","",""
"uuid:a16d6f4e-6e30-400f-a5d4-2bd6460fdb14","http://resolver.tudelft.nl/uuid:a16d6f4e-6e30-400f-a5d4-2bd6460fdb14","Application of Workflow Management System to the Modelling of Processes in Land Administration Systems","Vranić, Saša; Matijević, Hrvoje; Roić, Miodrag","","2018","Cadastral data are maintained through formally defined procedures which need to provide security and consistency. Databases and transactional model enable consistency but lack flexibility in modelling business processes and support for heterogenous IT environments (web services, various programs). Transactional workflow management systems (WFMS) provide flexibility and can provide consistency of data. Land administration domain model (LADM) provides an excellent basis for modelling static component of land administration systems, but doesn’t provide elements to model dynamic component, i.e. processes. In this paper we define conceptual model of a dynamic component of land administration system. We use the WFMS concept with integrated transactional support. Data model enables storing elements of Petri nets and it is divided to generic and extended. Generic model ensures consistency of processes on object level and is applicable on cadastral data generally. Extended model ensures consistency by spatially defining affected area of the process and it is used to model processes on cadastral parcels spatially represented by polygons. Modelling of processes and transactional support is achieved with Petri nets. Workflow elements enable ensuring consistency of a process in a pessimistic or optimistic manner. Pessimistic approach ensures consistency by locking objects affected by the process and optimistic approach leaves checking of concurrent changes until the very end of the process. 
Finally, we demonstrate how the devised model copes with a simple example of two separate processes, each of which attempts to split one of two adjacent parcels, in a pessimistic manner.","Correctness; Workflow; LADM; Process; Transaction","en","conference paper","","","","","","","","","","","","","",""
"uuid:31e1dd88-4d43-4e8f-92fc-f39c61922dd4","http://resolver.tudelft.nl/uuid:31e1dd88-4d43-4e8f-92fc-f39c61922dd4","Adaptive and high-resolution estimation of specific differential phase for polarimetric X-band weather radars","Reinoso Rondinel, R. (TU Delft Atmospheric Remote Sensing); Unal, C.M.H. (TU Delft Atmospheric Remote Sensing); Russchenberg, H.W.J. (TU Delft Geoscience and Remote Sensing)","","2018","One of the most beneficial polarimetric variables may be the specific differential phase KDP because of its independence from power attenuation and radar miscalibration. However, conventional KDP estimation requires a substantial amount of range smoothing as a result of the noisy characteristic of the measured differential phase ψDP. In addition, the backscatter differential phase δhv component of ψDP, significant at C- and X-band frequency, may lead to inaccurate KDP estimates. In this work, an adaptive approach is proposed to obtain accurate KDP estimates in rain from noisy ψDP, whose δhv is of significance, at range resolution scales. This approach uses existing relations between polarimetric variables in rain to filter δhv from ψDP while maintaining its spatial variability. In addition, the standard deviation of the proposed KDP estimator is mathematically formulated for quality control. The adaptive approach is assessed using four storm events, associated with light and heavy rain, observed by a polarimetric X-band weather radar in the Netherlands. It is shown that this approach is able to retain the spatial variability of the storms at scales of the range resolution. Moreover, the performance of the proposed approach is compared with two different methods. 
The results of this comparison show that the proposed approach outperforms the other two methods in terms of the correlation between KDP and reflectivity, and KDP standard deviation reduction.","Data processing; Filtering techniques; Radars/Radar observations; Remote sensing; Weather radar signal processing","en","journal article","","","","","","","","2019-04-30","","Geoscience and Remote Sensing","Atmospheric Remote Sensing","","",""
"uuid:570f708e-5ac3-41fb-bdf5-9eda898da1e5","http://resolver.tudelft.nl/uuid:570f708e-5ac3-41fb-bdf5-9eda898da1e5","Single- and Double-Sided Marchenko Imaging Conditions in Acoustic Media","van der Neut, J.R. (TU Delft ImPhys/Acoustical Wavefield Imaging); Brackenhoff, Joeri (Student TU Delft); Staring, Myrna (Student TU Delft); Zhang, L. (TU Delft Applied Geophysics and Petrophysics); de Ridder, S.A.L. (TU Delft Applied Geophysics and Petrophysics); Slob, E.C. (TU Delft Applied Geophysics and Petrophysics); Wapenaar, C.P.A. (TU Delft ImPhys/Acoustical Wavefield Imaging; TU Delft Applied Geophysics and Petrophysics)","","2018","","Image representation; acoustic signal processing","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2022-02-02","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:8d3f5b68-ee79-4dd4-bc38-8a9a0e0ee3e0","http://resolver.tudelft.nl/uuid:8d3f5b68-ee79-4dd4-bc38-8a9a0e0ee3e0","Towards an On-line Characterisation of Kaolin Calcination Process Using Short-Wave Infrared Spectroscopy","Guatame-Garcia, Adriana (TU Delft Resource Engineering); Buxton, M.W.N. (TU Delft Resource Engineering); Deon, F. (TU Delft Resource Engineering); Lievens, Caroline (University of Twente); Hecker, Chris (University of Twente)","","2018","In the production of calcined kaolin, the on-line monitoring of the calcination reaction is becoming more relevant for the generation of optimal products. In this context, this study aimed to assess the suitability of using infrared (IR) spectroscopy as a potential technique for the on-line characterization of the
calcination of kaolin. The transformation of kaolin samples calcined at different temperatures were characterized in the short-wave (SWIR) spectra using the kaolinite crystallinity (Kx) index and the depth of the water spectral feature (1900D). A high correlation between the standard operational procedure
for the quality control of calcined kaolin and the Kx index was observed (r = -0.89), as well as with the 1900D parameter (r = -0.96). This study offers a new conceptual approach to the use of SWIR spectroscopy for the characterization the calcination of kaolin, withdrawing the need of using extensive laboratory techniques.","kaolinite; metakaolinite; gamma-alumina; calcined kaolin; SWIR-MWIR-LWIR spectroscopy; Process control","en","journal article","","","","","","","","","","","Resource Engineering","","",""
"uuid:b8240342-c6f0-49b4-90b4-2de4b35fd808","http://resolver.tudelft.nl/uuid:b8240342-c6f0-49b4-90b4-2de4b35fd808","Latest development in the synthesis of ursodeoxycholic acid (UDCA): A critical review","Tonin, F. (TU Delft BT/Biocatalysis); Arends, I.W.C.E. (TU Delft BT/Biotechnologie)","","2018","Ursodeoxycholic acid (UDCA) is a pharmaceutical ingredient widely used in clinics. As bile acid it solubilizes cholesterol gallstones and improves the liver function in case of cholestatic diseases. UDCA can be obtained from cholic acid (CA), which is the most abundant and least expensive bile acid available. The now available chemical routes for the obtainment of UDCA yield about 30% of final product. For these syntheses several protection and deprotection steps requiring toxic and dangerous reagents have to be performed, leading to the production of a series of waste products. In many cases the cholic acid itself first needs to be prepared from its taurinated and glycilated derivatives in the bile, thus adding to the complexity and multitude of steps involved of the synthetic process. For these reasons, several studies have been performed towards the development of microbial transformations or chemoenzymatic procedures for the synthesis of UDCA starting from CA or chenodeoxycholic acid (CDCA). This promising approach led several research groups to focus their attention on the development of biotransformations with non-pathogenic, easy-to-manage microorganisms, and their enzymes. In particular, the enzymatic reactions involved are selective hydrolysis, epimerization of the hydroxy functions (by oxidation and subsequent reduction) and the specific hydroxylation and dehydroxylation of suitable positions in the steroid rings. In this minireview, we critically analyze the state of the art of the production of UDCA by several chemical, chemoenzymatic and enzymatic routes reported, highlighting the bottlenecks of each production step. 
Particular attention is paid to the availability of precursors as well as the substrate loading in the process. Potential new routes and recent developments are discussed, in particular the employment of flow reactors. The latter technology allows processes to be developed with shorter reaction times and lower costs for the chemical and enzymatic reactions involved.","Bile acids; Biotransformation; Hydroxysteroid dehydrogenases; Production process; UDCA","en","review","","","","","","","","","","BT/Biotechnologie","BT/Biocatalysis","","",""
"uuid:18c61a8d-2256-4b2d-878f-0406e272e982","http://resolver.tudelft.nl/uuid:18c61a8d-2256-4b2d-878f-0406e272e982","Studying ice particle growth processes in mixed-phase clouds using spectral polarimetric radar measurements","Pfitzenmaier, L. (TU Delft Atmospheric Remote Sensing)","Russchenberg, H.W.J. (promotor); Delft University of Technology (degree granting institution)","2018","Clouds are a prominent part of the Earth hydrological cycle. In the mid latitudes, the ice phase of clouds is highly involved in the formation of precipitation. The ice particles in the clouds fall to earth either as snow flakes, in the winter month, or melting crystals that become rain drops. An efficient growth process is the interaction of ice crystals and supercooled liquid ater droplets in so called mixed-phase clouds. Mixed phase cloud systems contain both - ice crystals and super cooled cloud droplets - in the same volume of air. The interaction of ice and liquid phase leads to an enhanced growth of ice crystals and, therefore, enhances the amount of precipitation. However, such processes are still not fully understood. This work hows that such complex microphysical processes in mixed-phase clouds can be observed using state of the art ground based radar techniques. Analyzing spectral polarimetric radar data, different signatures of particle growth processes can be identified.
The results presented are based on measurements obtained with the Transportable Atmospheric Radar (TARA) during the ACCEPT campaign (Analysis of the Composition of Clouds with Extended Polarization Techniques) in autumn 2014 in Cabauw, the Netherlands. TARA is an S-band radar profiler that has full Doppler and spectral polarimetric measurement capabilities. TARA's unique three-beam configuration also makes it possible to retrieve the full 3-D velocity vector. Because of its high temporal and spatial resolutions and its configuration, TARA can capture the complexity of cloud dynamics and the microphysical variabilities involved in mixed-phase cloud systems.
A new retrieval technique was applied to several case studies to qualitatively analyze ice particle growth processes within mixed-phase cloud systems. These results demonstrate that, using radar data re-arranged along fall streaks, the interpretation of Doppler spectra and polarization parameters can be improved. Based on synergetic measurements obtained during the ACCEPT campaign, it was possible to detect supercooled liquid water layers within the cloud system and relate them to the TARA observations. It was thus even possible to identify different growth processes, such as particle riming, generation of new particles, and particle diffusional growth, within the TARA measurements. This demonstrates that, in order to observe ice particle growth processes within complex systems, adequate radar technology and state-of-the-art retrieval algorithms are required. Moreover, the ice particle growth processes within cloud systems can be linked directly to increased rain intensities using radar data rearranged along fall streaks.
The last objective of the thesis is the extension of the spectral polarimetric measurement capabilities of TARA and the estimation of the differential phase and the specific differential phase in the spectral domain. These two parameters are frequently used to improve rain estimation and hydrometeor classification and, increasingly, to improve microphysical process understanding, e.g. the onset of the aggregation of ice particles. So far, these parameters have been used only as integrated moments. Nevertheless, further work has to be done to completely understand the microphysical information of these spectrally resolved parameters.
Overall, this work demonstrates that spectral polarimetric radar data can be used to improve microphysical process understanding. The presented work also shows that spectral polarimetric radar data can be used to estimate quantitative microphysical
properties related to ice particle growth.","cloud physics; spectral radar measurements; radar polarimetry; ice particle growth processes; mixed phase clouds","en","doctoral thesis","","978-94-6186-884-8","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:a7261808-82a4-4753-86e8-f228b772b0cd","http://resolver.tudelft.nl/uuid:a7261808-82a4-4753-86e8-f228b772b0cd","Can I touch you online?: Reshaping Touch Communication: An Interdisciplinary Research Agenda.","Lancel, K.A. (TU Delft System Engineering); Brazier, F.M. (TU Delft System Engineering); Maat, Hermen (Art and science research studio Lancel/Maat)","Jewitt, Prof. Carey (editor); Price, Prof. Sara (editor); Leder Mackley, Prof. Kerstin (editor); Huisman, Dr. Gijs (editor); Petreca, Prof. Bruna (editor); Berthouze, Prof. Nadia (editor); Prattichizzo, Domenico (editor); Hayward, Prof. Vincent (editor)","2018","This paper introduces art and research on disrupted, touch in networked environments in which realities merge. Aesthetic sensory disruption and haptic distribution are purposefully designed for reflection, in a new type of ‘dialogue space’. The effects of embodied cognition, with respect to trust and experience, are explored in Artistic Social Labs (ASLs) designed to this purpose. Two ASLs are described in this paper.","Performance interactive art; on-line touch; shared reflection and expression; mirror process; data environment; synchronization; sensory disruption and distribution; aesthetics of interactive art; co-creation; engagement; trust; digital art; playful environment; posthuman social construct; embodiment","en","conference paper","CHI","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-05-01","","","System Engineering","","",""
"uuid:3201ef45-fa2e-47a6-8712-0ab75b34f823","http://resolver.tudelft.nl/uuid:3201ef45-fa2e-47a6-8712-0ab75b34f823","Panel: Removing the barriers for personal data management","Bharosa, Nitesh (TU Delft Information and Communication Technology); Luitjens, Steven (Ministry of the Interior & Kingdom Relations of the Netherlands); van Wijk, R. (Cleverbase); Pardo, Theresa (University at Albany - State University of New York)","Hinnant, Charles C. (editor); Zuiderwijk, Anneke (editor)","2018","In our data-driven society, both public and private organisations are struggling with issues regarding privacy and personal data. On the one hand, consumers are required to hand over more and more personal data in return for (free) online services. On the other hand, regulations increasingly demand data minimisation and informed consent. Personal data management is often proposed as a human centric design philosophy that should ultimately allow consumers to gain back control over, and insight in, the processing of personal data. This signals a transition from provider centric to human centric e-societies. The goal of this panel is to explore which roles government, business and knowledge institutes can play in order to enable personal data management. What can and should these parties do? And what should consumers-the users of online services-do?.","EIDs; General data protection act; Information processing; Personal data management","en","conference paper","Association for Computing Machinery (ACM)","","","","","","","","","","Information and Communication Technology","","",""
"uuid:f334c56e-6df1-499c-846d-6ea754f8d3e7","http://resolver.tudelft.nl/uuid:f334c56e-6df1-499c-846d-6ea754f8d3e7","Radar micro-Doppler of Wind Turbines: Low-Frequency Polarimetric Extension of Simplified Analytical Model","Wangkheimayum, Kajengkhombi (Student TU Delft); Krasnov, O.A. (TU Delft Microwave Sensing, Signals & Systems); Yarovoy, Alexander (TU Delft Microwave Sensing, Signals & Systems)","","2018","A simplified polarimetric model for radar signal scattering on wind turbines (WT) is proposed, which can be used for interpretation of observation in time and Doppler domains. This model uses representation of real WT as slowly rotating linear wire structures. The earlier proposed model has been extended to more general cases of WT with arbitrary orientation and full polarimetric observations in mono- and bi-static cases, which can be used for analysis of low-frequency signals scattering. It gives a possibility to estimate the influence of scattered on WT electromagnetic waves not only on radars but also on communication links.","Doppler radar; electromagnetic wave scattering; Radar polarimetry; Radar signal processing; Wind turbines; time domain; slowly rotating linear wire structure; low-frequency signals scattering; communication links; monostatic case; bistatic case; Doppler domains; radar signal scattering; simplified polarimetric model; simplified analytical model; Low-frequency polarimetric extension; radar microDoppler; Blades; Scattering; Wires; Doppler effect; polarimetric Doppler radar; micro-Doppler; Wave scattering","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-12-03","","","Microwave Sensing, Signals & 
Systems","","",""
"uuid:c901c0bb-a5ac-426b-af55-6644c731df38","http://resolver.tudelft.nl/uuid:c901c0bb-a5ac-426b-af55-6644c731df38","From Learners to Earners: Enabling MOOC Learners to Apply Their Skills and Earn Money in an Online Market Place","Chen, G. (TU Delft Web Information Systems); Davis, D.J. (TU Delft Web Information Systems); Krause, Markus (University of California); Aivaloglou, E.A. (TU Delft Software Engineering); Hauff, C. (TU Delft Web Information Systems); Houben, G.J.P.M. (TU Delft Web Information Systems)","","2018","Massive Open Online Courses (MOOCs) aim to educate the world. More often than not, however, MOOCs fall short of this goal — a majority of learners are already highly educated (with a Bachelor degree or more) and come from specific parts of the (developed) world. Learners from developing countries without a higher degree are underrepresented, though desired, in MOOCs. One reason for those learners to drop out of a course can be found in their financial realities and the subsequent limited amount of time they can dedicate to a course besides earning a living. If we could pay learners to take a MOOC, this hurdle would largely disappear. With MOOCS, this leads to the following fundamental challenge: How can learners be paid at scale? Ultimately, we envision a recommendation engine that recommends tasks from online market places such as Upwork or witmart to learners, that are relevant to the course content of the MOOC. In this manner, the learners learn and earn money. To investigate the feasibility of this vision, in this paper we explored to what extent (1) online market places contain tasks relevant to a specific MOOC, and (2) learners are able to solve real-world tasks correctly and with sufficient quality. 
Finally, based on our experimental design, we were also able to investigate the impact of real-world bonus tasks in a MOOC on the general learner population.","Data analysis; Uncertainty; Monitoring; Process control; Education; Engines; Sociology","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-12-01","","","Web Information Systems","","",""
"uuid:2ba24db2-7604-41cf-b492-c3306e65c52d","http://resolver.tudelft.nl/uuid:2ba24db2-7604-41cf-b492-c3306e65c52d","An investigation on the hardness and corrosion behavior of MWCNT/Mg composites and grain refined Mg","Saikrishna, N. (Bhabha Atomic Research Centre; Rajiv Gandhi University of Knowledge Technologies (AP-IIIT)); Reddy, G. Pradeep Kumar (Vignana Bharathi Institute of Technology); Munirathinam, B. (TU Delft (OLD) MSE-6; Indian Institute of Technology Madras); Dumpala, Ravikumar (Visvesvaraya National Institute of Technology (VNIT)); Jagannatham, M. (Bhabha Atomic Research Centre); Ratna Sunil, B. (Rajiv Gandhi University of Knowledge Technologies (AP-IIIT))","","2018","In the present work, multi walled carbon nanotubes (MWCNT) reinforced magnesium (Mg) matrix composite was fabricated by friction stir processing (FSP) with an aim to explore its mechanical and electrochemical behavior. Microstructural observations showed that the thickness of the produced composite layer was in the range of 2500 μm. FSP resulted uniform distribution of CNT near the surface while agglomerated layers in the subsurface. Grain refinement of Mg achieved by FSP improved the hardness but significant enhancement in the hardness value was observed for FSPed MWCNT/Mg composites. Potentiodynamic polarization studies revealed that the increase in corrosion current density was observed for MWCNT/Mg composite compared with grain refined Mg and pure Mg, implying the significance of secondary phase (MWCNT) in decreasing the corrosion resistance of the composite.","Basal texture; Corrosion resistance; Friction stir processing; Hardness; MWCNT/Mg composite","en","journal article","","","","","","","","","","","(OLD) MSE-6","","",""
"uuid:4814afad-9350-4d86-974a-77bf39b72586","http://resolver.tudelft.nl/uuid:4814afad-9350-4d86-974a-77bf39b72586","Blind Graph Topology Change Detection","Isufi, E. (TU Delft Signal Processing Systems); Mahabir, Ashvant S.U. (Student TU Delft); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","This letter investigates methods to detect graph topological changes without making any assumption on the nature of the change itself. To accomplish this, we merge recently developed tools in graph signal processing with matched subspace detection theory and propose two blind topology change detectors. The first detector exploits the prior information that the observed signal is sparse w.r.t. the graph Fourier transform of the nominal graph, while the second makes use of the smoothness prior w.r.t. the nominal graph to detect topological changes. Both detectors are compared with their respective nonblind counterparts in a synthetic scenario that mimics brain networks. The absence of information about the alternative graph, in some cases, might heavily influence the blind detector's performance. However, in cases where the observed signal deviates slightly from the nonblind model, the information about the alternative graph turns out to be not useful.","Anomalous subgraph detection; brain networks; graph detection; graph signal processing; matched subspace detection","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2018-11-30","","","Signal Processing Systems","","",""
"uuid:c5354471-debc-4843-ad59-bf5370ec69c9","http://resolver.tudelft.nl/uuid:c5354471-debc-4843-ad59-bf5370ec69c9","Iterative reflectivity-constrained velocity estimation for seismic imaging","Masaya, S. (TU Delft ImPhys/Acoustical Wavefield Imaging); Eric Verschuur, D. J.","","2018","This paper proposes a reflectivity constraint for velocity estimation to optimally solve the inverse problem for active seismic imaging. This constraint is based on the velocity model derived from the definition of reflectivity and acoustic impedance. The constraint does not require any prior information of the subsurface and large extra computational costs, like the calculation of so-called Hessian matrices. We incorporate this constraint into the joint migration inversion algorithm, which simultaneously estimates both the reflectivity and velocity model of the subsurface in an iterative process. Using so-called full wavefield modelling, the misfit between forward modelled and measured data is minimized. Numerical and field data examples are given to demonstrate the validity of our proposed algorithm in case accurate initial models and the low-frequency components of observed seismic data are absent.","Image processing; Inverse theory; Seismic tomography; Waveform inversion","en","journal article","","","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:98a60e0a-58de-4af5-b23f-5f603d699839","http://resolver.tudelft.nl/uuid:98a60e0a-58de-4af5-b23f-5f603d699839","Autoregressive moving average graph filter design","Liu, J. (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","In graph signal processing, signals are processed by explicitly taking into account their underlying structure, which is generally characterized by a graph. In this field, graph filters play a major role to process such signals in the so-called graph frequency domain. In this paper, we focus on the design of autoregressive moving average (ARMA) graph filters and basically present two design approaches. The first approach is inspired by Prony's method, which considers a modified error between the modeled and the desired frequency response. The second approach is based on an iterative method, which finds the filter coefficients by iteratively minimizing the true error (instead of the modified error) between the modeled and the desired frequency response. The performance of the proposed design algorithms is evaluated and compared with finite impulse response (FIR) graph filters. The obtained results show that ARMA filters outperform FIR filters in terms of approximation accuracy even for the same computational cost.","Finite impulse response filters; Frequency response; Frequency-domain analysis; Autoregressive processes; Laplace equations; Matrix decomposition","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-11-25","","","Signal Processing Systems","","",""
"uuid:0289262a-bc71-45ce-8d8d-0a9ead34a410","http://resolver.tudelft.nl/uuid:0289262a-bc71-45ce-8d8d-0a9ead34a410","Transducer Placement Option of Lamb Wave SHM System for Hotspot Damage Monitoring","Ewald, Vincentius (TU Delft Structural Integrity & Composites); Groves, R.M. (TU Delft Structural Integrity & Composites); Benedictus, R. (TU Delft Structural Integrity & Composites)","","2018","In this paper, we investigated transducer placement strategies for detecting cracks in primary aircraft structures using ultrasonic Structural Health Monitoring (SHM). The approach developed is for an expected damage location based on fracture mechanics, for example fatigue crack growth in a high stress location. To assess the performance of the developed approach, finite-element (FE) modelling of a damage-tolerant aluminum fuselage has been performed by introducing an artificial crack at a rivet hole into the structural FE model and assessing its influence on the Lamb wave propagation, compared to a baseline measurement simulation. The efficient practical sensor position was determined from the largest change in area that is covered by reflected and missing wave scatter using an additive color model. Blob detection algorithms were employed to determine the boundaries of this area and to calculate the blob centroid. To demonstrate that the technique can be generalized, the results from different crack lengths and from tilted crack are also presented.","sensor placement option; hotspot damage; Lamb wave; Structural Health Monitoring (SHM); finite element modelling; image processing; additive color model","en","journal article","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:ff88f949-3417-467e-a1de-cc5af126982a","http://resolver.tudelft.nl/uuid:ff88f949-3417-467e-a1de-cc5af126982a","Adaptive Graph Signal Processing: Algorithms and Optimal Sampling Strategies","Di Lorenzo, Paolo (Sapienza University of Rome); Banelli, Paolo (University of Perugia); Isufi, E. (TU Delft Signal Processing Systems; University of Perugia); Barbarossa, Sergio (Sapienza University of Rome); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","The goal of this paper is to propose novel strategies for adaptive learning of signals defined over graphs, which are observed over a (randomly) time-varying subset of vertices. We recast two classical adaptive algorithms in the graph signal processing framework, namely, the least mean squares (LMS) and the recursive least squares (RLS) adaptive estimation strategies. For both methods, a detailed mean-square analysis illustrates the effect of random sampling on the adaptive reconstruction capability and the steady-state performance. Then, several probabilistic sampling strategies are proposed to design the sampling probability at each node in the graph, with the aim of optimizing the tradeoff between steady-state performance, graph sampling rate, and convergence rate of the adaptive algorithms. Finally, a distributed RLS strategy is derived and is shown to be convergent to its centralized counterpart. 
Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed sampling and reconstruction strategies for (possibly distributed) adaptive learning of signals defined over graphs.","Adaptation and learning; Adaptive learning; graph signal processing; Laplace equations; sampling on graphs; Signal processing; Signal processing algorithms; Steady-state; successive convex approximation; Task analysis; Tools","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-01-01","","","Signal Processing Systems","","",""
"uuid:797864cd-050e-4a30-b269-3602f51b579a","http://resolver.tudelft.nl/uuid:797864cd-050e-4a30-b269-3602f51b579a","Morphodynamic impacts of large-scale engineering projects in the Yangtze River delta","Luan, Hualong (East China Normal University; Changjiang River Scientific Research Institute (CRSRI)); Ding, P (East China Normal University); Wang, Zhengbing (TU Delft Coastal Engineering; East China Normal University; Deltares); Yang, S.L. (East China Normal University); Lu, Jin You (Changjiang River Scientific Research Institute (CRSRI))","","2018","Morphodynamics of world's river deltas are increasingly affected by human activities, which are of great ecological, economic and social implications. However, impacts of human interventions in deltaic regions are insufficiently
understood, especially superimposed upon diminishing sediment supplies. This study uses the heavily interfered Yangtze River delta as an example to address this issue. The morphodynamic impacts of the Deepwater Navigation Channel Project (DNCP) during 1997–2013 are investigated through process-based
modeling approach (Delft3D) and bathymetric data analysis. The DNCP was implemented in the mouth bar area of the Yangtze River delta including the twin dikes and 19 groynes with the total length of 132.0 km. Hydrodynamic simulations indicate that the training walls resulted in weaker tidal flow and longer slack period at the East Hengsha Shoal (EHS) and stronger tidal flow at the subaqueous delta. Thus, the EHS is characterized as a sediment accumulation zone after the completion of the training walls. Subsequently, morphological
modeling shows enhanced accretion at the EHS and enhanced erosion at the subaqueous delta when the training walls are taken into account. Numerical experiments further demonstrate that the above changes are mainly attributed to the seaward half of the northern training walls constructed in 2002–2005. This is probably the reason for the observed accretion peak of the EHS in 2002–2007 and the gradual increase in the erosion rate of the subaqueous delta after 2002. The schematized paths of sediment transport after the DNCP indicate that
sediment eroded from the subaqueous delta serves as an important source for accretion of the mouth bar area. It is suggested that siltation promoting projects within the mouth bar area increased shallow shoal accretion and aggravated erosion at the subaqueous delta. With the overall erosion of the Yangtze River delta due to river sediment reduction, large-scale estuarine engineering projects substantially increase the complicacy of its morphodynamic pattern, which merits close attention for sustainable delta management.","Morphodynamics; Estuarine engineering projects; Process-based modeling; Yangtze river delta","en","journal article","","","","","","","","2020-09-03","","","Coastal Engineering","","",""
"uuid:23f511af-e25f-4541-94e5-8604a27f6b4b","http://resolver.tudelft.nl/uuid:23f511af-e25f-4541-94e5-8604a27f6b4b","Design, development and validation of more realistic models for teaching breast examination","Veitch, D.E. (TU Delft Applied Ergonomics and Design); Bochner, Melissa (Royal Adelaide Hospital); Fellner, Lilian (Flinders University); Leigh, Christopher (University of Adelaide); Owen, Harry (Flinders University)","","2018","Our objective was to design, develop and validate better clinical breast examination (CBE) models addressing the deficiencies of previous models. Detailed research and a
methodological design approach led to the development of a new technique for creating lifelike models for teaching CBE. Six multi-layered breast models representing a range of normal human variation in durity (hardness/softness), nodularity (fibro-glandular tissue) and adiposity (fatty tissue) were developed and validated. Various construction materials, MRI scans, traditional casting and three-dimensional (3D) printing were used to build models with a lifelike look and feel (biofidelic). The models, realistic in anthropometry (size and shape), feel (durity and nodularity) and appearance (skin feel and colouring) – visual biofidelity enhances the perception of feel – incorporate anatomically correct layering of ribs, soft adipose tissue, nodularity and additional signs of breast disease, both benign and pathological.
These were validated by four breast surgeons who compared their feel with that of a sample of breast patients (N = 78). Models were rated as ‘undecided’, ‘similar’ or ‘very similar’ to 81% of patients for nodularity and 82% for durity. These are the first models to incorporate normal human variability and to be validated with real patients. These novel biofidelic models provide a standardized way of teaching health professionals to distinguish normal from abnormal tissue.","Medical simulation; clinical breast examination; design process; biofidelic manikin; medical teaching","en","journal article","","","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:bcfd2676-b422-4aa5-9e41-66f07a43282b","http://resolver.tudelft.nl/uuid:bcfd2676-b422-4aa5-9e41-66f07a43282b","PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging","Naghibzadeh, S. (TU Delft Signal Processing Systems); van der Veen, A.J. (TU Delft Signal Processing Systems)","","2018","Image formation in radio astronomy is a large-scale inverse problem that is inherently illposed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beam-formed image (which includes the classical 'dirty image') as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated 1D and 2D array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. 
Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.","Methods: numerical; Methods: statistical; Techniques: image processing; Techniques: interferometric","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-12-06","","","Signal Processing Systems","","",""
"uuid:e1b7d6c3-b3fd-42a8-9969-1c8b1fc3e6b8","http://resolver.tudelft.nl/uuid:e1b7d6c3-b3fd-42a8-9969-1c8b1fc3e6b8","Information distances for radar resolution analysis","Pribić, Radmila (Thales Nederland B.V.); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","A stochastic approach to resolution based on information distances computed from the geometry of data models which is characterized by the Fisher information is explored. Stochastic resolution includes probability of resolution and signal-to-noise ratio (SNR). The probability of resolution is assessed from a hypothesis test by exploiting information distances in a likelihood ratio. Taking SNR into account is especially relevant in compressive sensing (CS) due to its fewer measurements. Based on this information-geometry approach, we demonstrate the stochastic resolution analysis in test cases from array processing. In addition, we also compare our stochastic resolution bounds with the actual resolution obtained numerically from sparse signal processing which nowadays is a major component of the back end of any CS sensor. Results demonstrate the suitability of the proposed stochastic resolution analysis due to its ability to include crucial features in the resolution performance guarantees: array configuration or sensor design, SNR, separation and probability of resolution.","array processing; compressive sensing; information geometry; likelihood ratio; radar; resolution","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-09-15","","","Signal Processing Systems","","",""
"uuid:e222c07a-023f-4287-afb2-d55e1e6d445c","http://resolver.tudelft.nl/uuid:e222c07a-023f-4287-afb2-d55e1e6d445c","From Abstract to Tangible: Supporting the Materialization of Experiential Visions with the Experience Map","Camere, S. (TU Delft Emerging Materials; Politecnico di Milano); Schifferstein, Hendrik N.J. (TU Delft Design Aesthetics); Bordegoni, Monica (Politecnico di Milano)","","2018","Designing for pleasurable and engaging product experiences requires an understanding of how users will experience the product, sometimes at a very abstract level. This focus on user experiences, rather than on the formal qualities of the product, might cause difficulties for designers in the materialization of design ideas. Designers need to navigate through several choices, shaping and refining the product qualities in order to elicit the intended experience. To support this process, we propose a tool, the Experience Map, guiding designers in the progressive transformation of an ‘experiential vision’ into tangible formal qualities, considering all the opportunities perceived by the different senses. The paper presents the results of two studies in which we verified the potential of the Experience Map, first in a workshop with design students and second in four design cases with professional designers. The results show that the Experience Map can provide a good structure to organize creative thoughts and progressively decrease the level of abstraction, particularly to support novice designers. It stimulates greater confidence and awareness of design decisions, while allowing the exploration of several design directions in parallel. 
These benefits, together with the visually stimulating layout and its ability to foster awareness of design decisions, make the Experience Map an effective tool to support experience-driven design practice, especially in the early phases of the creative process and in the educational context.","Multi Sensory Design; Design Process; Experience Design; Experience Map; Design Intentions","en","journal article","","","","","","","","","","","Emerging Materials","","",""
"uuid:fd1e3813-88f4-4c66-8794-1fcb489c035d","http://resolver.tudelft.nl/uuid:fd1e3813-88f4-4c66-8794-1fcb489c035d","Thermodynamic evaluation for reduction of iron oxide ore particles in a high temperature drop tube furnace","Chen, Z. (TU Delft (OLD) MSE-3); Qu, Yingxia (Northeastern University); Zeilstra, Christiaan (Tata Steel); van der Stel, Jan (Tata Steel); Sietsma, J. (TU Delft Materials Science and Engineering; TU Delft (OLD) MSE-3); Yang, Y. (TU Delft (OLD) MSE-3)","","2018","Melting and reduction of fine iron ore particles in the gas environment of a HIsarna smelting cyclone is a critically important topic, but very limited information is currently available except for some experimental data from high temperature drop tube furnace (HTDF). This work discusses the equilibrium state of reacting iron ore in the HTDF environment by thermodynamic calculations to strengthen the understanding of the HIsarna process. The limit of reduction termination of the ore particles was estimated in the calculation for the thermal decomposition and topochemical gas reduction. The theoretical calculation results are compared with the experimental data from the previous studies. Furthermore, variation of slag composition and iron valence states were estimated theoretically to understand the effects of post combustion ratio value and hydrogen/carbon ratio on the equilibrium state of the ore particles in the reducing gas.","gas–solid particle reduction; haematite ore; HIsarna process; thermodynamics","en","journal article","","","","","","","","","","Materials Science and Engineering","(OLD) MSE-3","","",""
"uuid:0cdae7e1-a0f6-443e-a37a-dd9057c2597c","http://resolver.tudelft.nl/uuid:0cdae7e1-a0f6-443e-a37a-dd9057c2597c","Analysis of Diffusion in Solid-State Electrolytes through MD Simulations, Improvement of the Li-Ion Conductivity in β-Li3PS4 as an Example","de Klerk, N.J.J. (TU Delft RST/Storage of Electrochemical Energy); van der Maas, E.L.; Wagemaker, M. (TU Delft RST/Storage of Electrochemical Energy)","","2018","Molecular dynamics simulations are a powerful tool to study diffusion processes in battery electrolyte and electrode materials. From molecular dynamics simulations, many properties relevant to diffusion can be obtained, including the diffusion path, amplitude of vibrations, jump rates, radial distribution functions, and collective diffusion processes. Here it is shown how the activation energies of different jumps and the attempt frequency can be obtained from a single molecular dynamics simulation. These detailed diffusion properties provide a thorough understanding of diffusion in solid electrolytes, and provide direction for the design of improved solid electrolyte materials. The presently developed analysis methodology is applied to DFT MD simulations of Li-ion diffusion in β-Li3PS4. The methodology presented is generally applicable to diffusion in crystalline materials and facilitates the analysis of molecular dynamics simulations. The code used for the analysis is freely available at: https://bitbucket.org/niekdeklerk/md-analysis-withmatlab. The results on β-Li3PS4 demonstrate that jumps between bc planes limit the conductivity of this important class of solid electrolyte materials. The simulations indicate that the rate-limiting jump process can be accelerated significantly by adding Li interstitials or Li vacancies, promoting three-dimensional diffusion, which results in increased macroscopic Li-ion diffusivity. Li vacancies can be introduced through Br doping, which is predicted to result in an order of magnitude larger Li-ion conductivity in β-Li3PS4. Furthermore, the present simulations rationalize the improved Li-ion diffusivity upon O doping through the change in Li distribution in the crystal. Thus, it is demonstrated how a thorough understanding of diffusion, based on thorough analysis of MD simulations, helps to gain insight and develop strategies to improve the ionic conductivity of solid electrolytes.
Link to 4TU.Centre for Research Data: https://doi.org/10.4121/uuid:bef54ab8-73ef-42f3-b6b7-54e011737e72","traffic; microphone; smartphone; sound analysis; sound processing","en","conference paper","Delft University of Technology","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2018-12-01","","","Intelligent Vehicles","","",""
"uuid:2f89be95-c0cf-4412-8aeb-4bf15406b590","http://resolver.tudelft.nl/uuid:2f89be95-c0cf-4412-8aeb-4bf15406b590","Internet of Things adoption for reconfiguring decision-making processes in asset management","Brous, P.A. (TU Delft Information and Communication Technology); Janssen, M.F.W.H.A. (TU Delft Information and Communication Technology); Herder, P.M. (TU Delft Energie and Industrie)","","2018","Purpose: Managers are increasingly looking to adopt the Internet of Things (IoT) to include the vast amount of big data generated in their decision-making processes. The use of IoT might yield many benefits for organizations engaged in civil infrastructure management, but these benefits might be difficult to realize as organizations are not equipped to handle and interpret this data. The purpose of this paper is to understand how IoT adoption affects decision-making processes. Design/methodology/approach: In this paper the changes in the business processes for managing civil infrastructure assets brought about by IoT adoption are analyzed by investigating two case studies within the water management domain. Propositions for effective IoT adoption in decision-making processes are derived. Findings: The results show that decision processes in civil infrastructure asset management have been transformed to deal with the real-time nature of the data. The authors found the need to make organizational and business process changes, development of new capabilities, data provenance and governance and the need for standardization. IoT can have a transformative effect on business processes. Research limitations/implications: Because of the chosen research approach, the research results may lack generalizability. Therefore, researchers are encouraged to test the propositions further. Practical implications: The paper shows that data provenance is necessary to be able to understand the value and the quality of the data often generated by various organizations. 
Managers need to adopt new capabilities to be able to interpret the data. Originality/value: This paper fulfills an identified need to understand how IoT adoption affects decision-making processes in asset management in order to be able to achieve expected benefits and mitigate risk.","Adoption; Asset management; Business process; Internet of Things; IoT","en","journal article","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:b59ae19f-a611-4df5-8601-16bcaf2e0ef1","http://resolver.tudelft.nl/uuid:b59ae19f-a611-4df5-8601-16bcaf2e0ef1","From infinite to finite programs: Explicit error bounds with applications to approximate dynamic programming","Mohajerin Esfahani, P. (TU Delft Team Tamas Keviczky); Sutter, Tobias (ETH Zürich); Kuhn, Daniel (Swiss Federal Institute of Technology); Lygeros, John (ETH Zürich)","","2018","We consider linear programming (LP) problems in infinite dimensional spaces that are in general computationally intractable. Under suitable assumptions, we develop an approximation bridge from the infinite dimensional LP to tractable finite convex programs in which the performance of the approximation is quantified explicitly. To this end, we adopt the recent developments in two areas of randomized optimization and first-order methods, leading to a priori as well as a posteriori performance guarantees. We illustrate the generality and implications of our theoretical results in the special case of the long-run average cost and discounted cost optimal control problems in the context of Markov decision processes on Borel spaces. The applicability of the theoretical results is demonstrated through a fisheries management problem.
P number-a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width-as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.","Coastal processes; Modeling; Nutrients; Ocean; River plumes","en","journal article","","","","","","","","2018-09-01","","","Atmospheric Remote Sensing","","",""
"uuid:7e03b12d-9885-49ba-a240-9a98e5b44e66","http://resolver.tudelft.nl/uuid:7e03b12d-9885-49ba-a240-9a98e5b44e66","An improved stress recovery technique for low-order 3D finite elements","Sharma, Rahul; Zhang, J. (TU Delft Computational Design and Mechanics); Langelaar, Matthijs (TU Delft Computational Design and Mechanics); van Keulen, A. (TU Delft Computational Design and Mechanics); Aragon, A.M. (TU Delft Computational Design and Mechanics)","","2018","In this paper, we propose a stress recovery procedure for low-order finite elements in 3D. For each finite element, the recovered stress field is obtained by satisfying equilibrium in an average sense and by projecting the directly calculated stress field onto a conveniently chosen space. Compared with existing recovery techniques, the current procedure gives more accurate stress fields, is simpler to implement, and can be applied to different types of elements without further modification. We demonstrate, through a set of examples in linear elasticity, that the recovered stresses converge at a higher rate than that of directly calculated stresses and that, in some cases, the rate of convergence is the same as that of the displacement field.","Finite element analysis; Low-order 3D finite elements; Post-processed stress; Stress convergence; Stress recovery","en","journal article","","","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:dab838c5-f9b7-4b4c-8085-1225fde754ff","http://resolver.tudelft.nl/uuid:dab838c5-f9b7-4b4c-8085-1225fde754ff","Roll-to-Roll Fabrication of Solution Processed Electronics","Abbel, Robert (TNO); Galagan, Yulia (TNO); Groen, W.A. (TU Delft Novel Aerospace Materials; TNO)","","2018","The production of electronic devices using solution based (“wet”) deposition technologies has some decisive technical and commercial advantages compared to competing approaches like vacuum based (“dry”) manufacturing. Particularly, the potential to scale up production processes to large areas and high volumes by introducing continuous roll-to-roll (R2R) methods on flexible substrates has for some years been a topic of intense study at both applied research institutes and in industry. Significant steps forward have been achieved during that time, resulting in the dawn of commercial applications for a number of processes, while additional development work is still needed in some other fields. This review summarizes the work published during the last few years on the R2R printing and wet coating of electronic devices. An overview is presented of the basic operational principles for the most commonly used R2R printing and coating methods and techniques for proper web handling in R2R lines.
Then, the most commonly used types of flexible substrate materials are introduced, followed by a review of the work published in the application areas of transparent conductor materials, printed electric connections, light emitting devices, photovoltaic energy generation, printed logic, and sensing.","coating, roll-to-roll; high-volume production; large-scale manufacturing; printing; solution processed electronics","en","review","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-12-08","","","Novel Aerospace Materials","","",""
"uuid:513a82a0-179e-444d-baec-0105dd71d903","http://resolver.tudelft.nl/uuid:513a82a0-179e-444d-baec-0105dd71d903","Fourier multipliers and weak differential subordination of martingales in UMD Banach spaces","Yaroslavtsev, I.S. (TU Delft Analysis)","","2018","We introduce the notion of weak differential subordination for martingales, and show that a Banach space X is UMD if and only if for all p ∈ (1, ∞) and all purely discontinuous X-valued martingales M and N such that N is weakly differentially subordinated to M, one has the estimate E || N∞ ||p ≤ CpE|| M∞ ||p. As a corollary we derive a sharp estimate for the norms of a broad class of even Fourier multipliers, which includes e.g. the second order Riesz transforms.","Burkholder function; Differential subordination; Fourier multipliers; Hilbert transform; Lévy process; Purely discontinuous martingales; Sharp estimates; Stochastic integration; UMD Banach spaces; Weak differential subordination","en","journal article","","","","","","","","","","","Analysis","","",""
"uuid:6c614d0a-559a-4151-8a2e-c1e911ec4546","http://resolver.tudelft.nl/uuid:6c614d0a-559a-4151-8a2e-c1e911ec4546","Estimating the Cross-Shelf Export of Riverine Materials: Part 2. Estimates of Global Freshwater and Nutrient Export","Izett, J.G. (TU Delft Atmospheric Remote Sensing; Dalhousie University); Fennel, Katja (Dalhousie University)","","2018","Rivers deliver large amounts of fresh water, nutrients, and other terrestrially derived materials to the coastal ocean. Where inputs accumulate on the shelf, harmful effects such as hypoxia and eutrophication can result. In contrast, where export to the open ocean is efficient, riverine inputs contribute to global biogeochemical budgets. Assessing the fate of riverine inputs is difficult on a global scale. Global ocean models are generally too coarse to resolve the relatively small scale features of river plumes. High-resolution regional models have been developed for individual river plume systems, but it is impractical to apply this approach globally to all rivers. Recently, generalized parameterizations have been proposed to estimate the export of riverine fresh water to the open ocean (Izett & Fennel, 2018, https://doi.org/10.1002/2017GB005667; Sharples et al., 2017, https://doi.org/10.1002/2016GB005483). Here the relationships of Izett and Fennel (2018, https://doi.org/10.1002/2017GB005667) are used to derive global estimates of open-ocean export of fresh water and dissolved inorganic silicate, dissolved organic carbon, and dissolved organic and inorganic phosphorus and nitrogen. We estimate that only 15-53% of riverine fresh water reaches the open ocean directly in river plumes; nutrient export is even less efficient because of processing on continental shelves. Due to geographic differences in riverine nutrient delivery, dissolved silicate is the most efficiently exported to the open ocean (7-56.7%), while dissolved inorganic nitrogen is the least efficiently exported (2.8-44.3%).
These results are consistent with previous estimates and provide a simple way to parameterize export to the open ocean in global models.","Coastal processes; Global estimates; Nutrients; Ocean; River plumes","en","journal article","","","","","","","","2018-09-01","","","Atmospheric Remote Sensing","","",""
"uuid:ab9fa355-867d-4548-bf1d-330aab248af3","http://resolver.tudelft.nl/uuid:ab9fa355-867d-4548-bf1d-330aab248af3","Analysis of physical and cyber security-related events in the chemical and process industry","Casson Moreno, Valeria (University of Bologna); Reniers, G.L.L.M.E. (TU Delft Safety and Security Science); Salzano, Ernesto (University of Bologna); Cozzani, Valerio (University of Bologna)","","2018","Security threats are becoming an increasing concern for chemical sites and related infrastructures where relevant quantities of hazardous materials are processed, stored or transported. In the present study, security related events that affected chemical and process sites, and related infrastructures, were investigated. The aim of the study is to frame a clear picture of the threats affecting the chemical and process industry, and to issue lessons learnt from past events. A database of 300 security-related accidents was developed and populated, starting from European and American sources. Threat categories that caused such events were identified and analyzed. The attack modes were investigated. Important differences were found with respect to geographical areas and industrial sectors affected. The use of explosives (both military and improvised explosive devices) is by far the more frequent attack mode, although armed attacks and arson are also frequent events and may result in an in-depth penetration of the attackers. In recent years, cyber-attacks are also posing important threats. Lessons learnt call for the implementation of a specific security management system in the chemical and process industry, aiming at the physical and cyber protection of industrial sites.","Accidents; Attacks; Chemical and process industry; Cyber; Incidents; Security; Threat","en","journal article","","","","","","","","2020-03-23","","","Safety and Security Science","","",""
"uuid:fea51616-4a5b-44a4-a20d-409c6815ac75","http://resolver.tudelft.nl/uuid:fea51616-4a5b-44a4-a20d-409c6815ac75","Quantum dot solar cells: Small beginnings have large impacts","Ganesan, Abiseka Akash (Student TU Delft); Houtepen, A.J. (TU Delft ChemE/Opto-electronic Materials); Crisp, R.W. (TU Delft ChemE/Opto-electronic Materials)","","2018","From a niche field over 30 years ago, quantum dots (QDs) have developed into viable materials for many commercial optoelectronic devices. We discuss the advancements in Pb-based QD solar cells (QDSCs) from a viewpoint of the pathways an excited state can take when relaxing back to the ground state. Systematically understanding the fundamental processes occurring in QDs has led to improvements in solar cell efficiency from ~3% to over 13% in 8 years. We compile data from ~200 articles reporting functioning QDSCs to give an overview of the current limitations in the technology. We find that the open circuit voltage limits the device efficiency and propose some strategies for overcoming this limitation.","IV-VI semiconductors; Lead sulfide; Ligand-exchange; Nanocrystals; Photovoltaics; Quantum dots; Solar cells; Solution-processed","en","review","","","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:26bb1d3a-9992-4fb8-9bc0-1d3fd5c79c4b","http://resolver.tudelft.nl/uuid:26bb1d3a-9992-4fb8-9bc0-1d3fd5c79c4b","Path-space moderate deviation principles for the random field curie-weiss model","Collet, F. (TU Delft Applied Probability); Kraaij, R.C. (Ruhr-Universität Bochum)","","2018","We analyze the dynamics of moderate fluctuations for macroscopic observables of the random field Curie-Weiss model (i.e., standard Curie-Weiss model embedded in a site-dependent, i.i.d. random environment). We obtain path-space moderate deviation principles via a general analytic approach based on convergence of nonlinear generators and uniqueness of viscosity solutions for associated Hamilton-Jacobi equations. The moderate asymptotics depend crucially on the phase we consider and moreover, the space-time scale range for which fluctuations can be proven is restricted by the addition of the disorder.","Hamilton-jacobi equation; Interacting particle systems; Mean-field interaction; Moderate deviations; Perturbation theory for Markov processes; Quenched random environment","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:7a553a2f-d555-4049-a067-73e9a226a5ec","http://resolver.tudelft.nl/uuid:7a553a2f-d555-4049-a067-73e9a226a5ec","Numerical modelling of erosion rates, life span and maintenance volumes of mega nourishments","Tonnon, Pieter Koen (Deltares); Huisman, B.J.A. (TU Delft Coastal Engineering; Deltares); Stam, G. N.; van Rijn, L. C. (Leo van Rijn Sediment Consultancy)","","2018","Mega-nourishments, aiming at providing long-term coastal safety, nature qualities and recreational space, have been applied recently at the Holland coast and are considered at various other places in the world. Methods to quickly evaluate the potential and lifetime of these coastal mega nourishments are therefore highly desirable; developing such methods is the main objective of this research. Two types of mega nourishments can be distinguished: feeder-type mega nourishments may erode freely to feed adjacent coasts for a more natural, dynamic dune growth, while permanent mega-nourishments are designed to preserve safety levels and need to maintain their size and shape, and thus need to be nourished themselves. The design and impact assessment studies for both types of mega nourishments require detailed morphological studies to determine the morphological evolution. In this paper 2DH (Delft3D) and 1D (UNIBEST-CL+ and LONGMOR) numerical models were calibrated using data of the Sand Motor mega-nourishment and were then applied to model a series of mega-nourishments with various width over length ratios and volumes in order to derive relations and design graphs for erosion rates, life span and maintenance volumes. These relations and design graphs can be used in project initiation phases and feasibility studies. The magnitude of the modelled wave-driven longshore sediment transport rates in 1D coastline models depends on the representation of wave refraction on the lower shoreface, since a distinction should be made between the non-rotating lower shoreface and active surfzone. 
It was shown that the lifetime of nourishments is mainly determined by the dimensions of the nourishment and the incoming wave energy.","Coastal morphodynamics; Delft3D; LONGMOR; Mega nourishment; Process-based modelling; UNIBEST","en","journal article","","","","","","","","2019-11-07","","","Coastal Engineering","","",""
"uuid:3091b118-e955-4727-9127-79a34d026cd8","http://resolver.tudelft.nl/uuid:3091b118-e955-4727-9127-79a34d026cd8","A tour of Marchenko redatuming: Focusing the subsurface wavefield","Cui, Tianci (Schlumberger Gould Research); Vasconcelos, Ivan (Universiteit Utrecht); Manen, Dirk Jan Van (Institute of Geophysics); Wapenaar, C.P.A. (TU Delft Applied Geophysics and Petrophysics)","","2018","Marchenko redatuming can retrieve the impulse response to a subsurface virtual source from the single-sided surface reflection data with limited knowledge of the medium. We illustrate the concepts and practical aspects of Marchenko redatuming on a simple 1D acoustic lossless medium in which the coupled Marchenko equations are exact. Defined in a truncated version of the actual medium, the Marchenko focusing functions focus the wavefields at the virtual source location and are responsible for the subsequent retrieval of the downgoing and upgoing components of the medium's impulse response. In real seismic exploration, where we have no access to the truncated medium, we solve the coupled Marchenko equations by iterative substitution, relying on the causality relations between the focusing functions and the desired Green's functions along with an initial estimate of the downgoing focusing function. We show that the amplitude accuracy of the initial focusing function influences that of the retrieved Green's functions. During each iteration, propagating an updated focusing function into the actual medium can be approximated by explicit convolution with the broadband reflection seismic data after appropriate processing, which acts as a proxy for the true medium's reflection response.","Acoustic; Autofocusing; Internal multiples; Processing","en","journal article","","","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:4e0bf139-e3f0-41f0-8df3-06cf39e0856c","http://resolver.tudelft.nl/uuid:4e0bf139-e3f0-41f0-8df3-06cf39e0856c","Three-Dimensional Sediment Dynamics in Well-Mixed Estuaries: Importance of the Internally Generated Overtide, Spatial Settling Lag, and Gravitational Circulation","Wei, X. (TU Delft Mathematical Physics; National Oceanography Center); Kumar, M. (TU Delft Mathematical Physics); Schuttelaars, H.M. (TU Delft Mathematical Physics)","","2018","To investigate the dominant sediment transport and trapping mechanisms, a semi-analytical three-dimensional model is developed resolving the dynamic effects of salt intrusion on sediment in well-mixed estuaries in morphodynamic equilibrium. As a study case, a schematized estuary with a converging width and a channel-shoal structure representative for the Delaware estuary is considered. When neglecting Coriolis effects, sediment downstream of the estuarine turbidity maximum (ETM) is imported into the estuary through the deeper channel and exported over the shoals. Within the ETM region, sediment is transported seaward through the deeper channel and transported landward over the shoals. The largest contribution to the cross-sectionally integrated seaward residual sediment transport is attributed to the advection of tidally averaged sediment concentrations by river-induced flow and tidal return flow. This contribution is mainly balanced by the residual landward sediment transport due to temporal correlations between the suspended sediment concentrations and velocities at the M2 tidal frequency. The M2 sediment concentration mainly results from spatial settling lag effects and asymmetric bed shear stresses due to interactions of M2 bottom velocities and the internally generated M4 tidal velocities, as well as the salinity-induced residual currents. Residual advection of tidally averaged sediment concentrations also plays an important role in the landward sediment transport. 
Including Coriolis effects hardly changes the cross-sectionally integrated sediment balance, but results in a landward (seaward) sediment transport on the right (left) side of the estuary looking seaward, consistent with observations from literature. The sediment transport/trapping mechanisms change significantly when varying the settling velocity and river discharge.","estuarine turbidity maximum (ETM); gravitational circulation; lateral processes; salt intrusion; sediment transport; well-mixed","en","journal article","","","","","","","","","","","Mathematical Physics","","",""
"uuid:500507b7-b9f2-41a9-816d-2f324ba9fb50","http://resolver.tudelft.nl/uuid:500507b7-b9f2-41a9-816d-2f324ba9fb50","Piecewise deterministic Markov processes for scalable Monte Carlo on restricted domains","Bierkens, G.N.J.C. (TU Delft Statistics); Bouchard-Côté, Alexandre (University of British Columbia); Doucet, Arnaud (University of Oxford); Duncan, Andrew B. (University of Sussex); Fearnhead, Paul (University of Lancaster); Lienart, Thibaut (University of Oxford); Roberts, Gareth (University of Warwick); Vollmer, Sebastian J. (University of Warwick)","","2018","Piecewise Deterministic Monte Carlo algorithms enable simulation from a posterior distribution, whilst only needing to access a sub-sample of data at each iteration. We show how they can be implemented in settings where the parameters live on a restricted domain.","Bayesian statistics; Logistic regression; MCMC; Piecewise deterministic Markov processes","en","journal article","","","","","","Accepted author manuscript","","2020-05-25","","","Statistics","","",""
"uuid:01e14698-7b9f-4c6f-8ed3-58658673099c","http://resolver.tudelft.nl/uuid:01e14698-7b9f-4c6f-8ed3-58658673099c","Impact of fuel selection on the environmental performance of post-combustion calcium looping applied to a cement plant","Schakel, Wouter (Universiteit Utrecht); Hung, Christine Roxanne (Norwegian University of Science and Technology (NTNU)); Tokheim, Lars Andre (University College of Southeast Norway); Strømman, Anders Hammer (Norwegian University of Science and Technology (NTNU)); Worrell, Ernst (Universiteit Utrecht); Ramirez, Andrea (TU Delft Energie and Industrie; Universiteit Utrecht)","","2018","Calcium looping CO2 capture is a promising technology to reduce CO2 emissions from cement production. Coal has been seen as a logical choice of fuel to drive the calcium looping process as coal is already the primary fuel used to produce cement. This study assesses the impact of using different fuels, namely coal, natural gas, woody biomass and a fuel mix (50% coal, 25% biomass and 25% animal meal), on the environmental performance of tail-end calcium looping applied to the clinker production at a cement plant in North-western Europe. Process modelling was applied to determine the impact of the different fuels on the mass and energy balance of the process which were subsequently used to carry out a life cycle assessment to evaluate the environmental performance of the different systems. Using natural gas, biomass or a fuel mix instead of coal in a tail-end calcium looping process can improve the efficiency of the process, as it decreases fuel, limestone and electricity consumption. 
Consequently, while coal-fired calcium looping can reduce the global warming potential (life cycle CO2 emissions) of clinker production by 75%, the use of natural gas further decreases these emissions (reduction of 86%) and biomass use could result in an almost carbon neutral (reduction of 95% in the fuel mix case) or net negative process (−104% reduction in the biomass case). Furthermore, replacing coal with natural gas or biomass reduces most other environmental impact categories as well, mostly due to avoided impacts from coal production. The level of improvement strongly depends on whether spent sorbent can be utilized in clinker production, and to what extent sequestered biogenic CO2 can reduce global warming potential. Overall, the results illustrate the potential of using alternative fuels to improve the environmental performance of tail-end calcium looping in the cement industry.","Biomass; Calcium looping; Cement plant; CO2 capture; LCA; Process modelling","en","journal article","","","","","","","","","","","Energie and Industrie","","",""
"uuid:8c1e1ede-3d44-4c8b-b0cd-8cbbe7ddc8d4","http://resolver.tudelft.nl/uuid:8c1e1ede-3d44-4c8b-b0cd-8cbbe7ddc8d4","Distributed edge-variant graph filters","Coutino, Mario (TU Delft Signal Processing Systems); Isufi, E. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","The main challenges distributed graph filters face in practice are the communication overhead and computational complexity. In this work, we extend the state-of-the-art distributed finite impulse response (FIR) graph filters to an edge-variant (EV) version, i.e., a filter where every node weights the signals from its neighbors with different values. Besides having the potential to reduce the filter order leading to amenable communication and complexity savings, the EV graph filter generalizes the class of classical and node-variant FIR graph filters. Numerical tests validate our findings and illustrate the potential of the EV graph filters to (i) approximate a user-provided frequency response; and (ii) implement distributed consensus with much lower orders than its direct contenders.","Edge-variant filters; finite-time consensus; FIR graph filters; graph filters; graph signal processing","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-11-25","","","Signal Processing Systems","","",""
"uuid:0fd21448-8065-4139-8cca-b9ca9ba83619","http://resolver.tudelft.nl/uuid:0fd21448-8065-4139-8cca-b9ca9ba83619","Graph Sampling with and Without Input Priors","Chepuri, S.P. (TU Delft Signal Processing Systems); Eldar, Yonina C. (Technion); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","In this paper the focus is on sampling and reconstruction of signals supported on nodes of arbitrary graphs or arbitrary signals that may be represented using graphs, where we extend concepts from generalized sampling theory to the graph setting. To recover such signals from a given set of samples, we develop algorithms that incorporate prior knowledge on the original signal when available such as smoothness or subspace priors related to the underlying graph. For reconstructing arbitrary signals, we constrain the reconstruction to the graph, and provide a consistent reconstruction method, in which both the reconstructed signal and the input yield exactly the same measurements. Given a set of graph frequency domain samples, the sampling and interpolation operations may be efficiently implemented using linear shift-invariant graph filters.","Consistent reconstruction; Frequency domain sampling; Graph sampling; Graph signal processing; Subspace prior","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-03-13","","","Signal Processing Systems","","",""
"uuid:9a666c46-6e53-466d-adf1-9424c07d6cb2","http://resolver.tudelft.nl/uuid:9a666c46-6e53-466d-adf1-9424c07d6cb2","Uniaxial Acoustic Vector Sensors for direction-of-arrival estimation","Nambur Ramamohan, K. (TU Delft Signal Processing Systems; Microflown Technologies); Comesaña, Daniel Fernandez (Microflown Technologies); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","In this paper, a specific reduced-channel Acoustic Vector Sensor (AVS) is proposed comprising one omni-directional microphone and only one particle velocity transducer, such that it can have an arbitrary orientation. Such a reduced transducer configuration is referred to as a Uniaxial AVS (U-AVS). The DOA performance of an array of U-AVSs is analyzed through its beampattern and compared to conventional configurations. It is shown that the U-AVS array beampattern results in an asymptotically biased estimate of the source location and it can be varied by choosing the orientation angles of the particle velocity transducers. Analytical expressions for the asymptotic bias of classical beamforming are proposed and verified both numerically as well as experimentally for Uniform Linear Arrays (ULAs). Furthermore, the Cramér-Rao Bound (CRB) and Mean Square Error (MSE) expressions are derived for a U-AVS array under a single source scenario and they are numerically evaluated for ULA. 
The implications of changing the orientations of the U-AVSs in the array on the MSE are discussed as well.","Acoustic Vector Sensor (AVS); Array processing; Beampattern; Cramér-Rao Bound (CRB); Direction-of-Arrival (DOA); Mean Square Error (MSE)","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-03-05","","","Signal Processing Systems","","",""
"uuid:ccb53832-4f09-4e4b-a5cc-0fd7c7c3b62b","http://resolver.tudelft.nl/uuid:ccb53832-4f09-4e4b-a5cc-0fd7c7c3b62b","Distributed Analytical Graph Identification","Chepuri, S.P. (TU Delft Signal Processing Systems); Coutino, Mario (TU Delft Signal Processing Systems); Marques, Antonio G. (King Juan Carlos University); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","An analytical algebraic approach for distributed network identification is presented in this paper. The information propagation in the network is modeled using a state-space representation. Using the observations recorded at a single node and a known excitation signal, we present algorithms to compute the eigenfrequencies and eigenmodes of the graph in a distributed manner. The eigenfrequencies of the graph may be computed using a generalized eigenvalue algorithm, while the eigenmodes can be computed using an eigenvalue decomposition. The developed theory is demonstrated using numerical experiments.","Distributed graph-spectral decomposition; Graph signal processing; Spectrum analysis; System identification; Topology identification","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-03-13","","","Signal Processing Systems","","",""
"uuid:588a54ef-2595-4f95-96d9-e4b40599f012","http://resolver.tudelft.nl/uuid:588a54ef-2595-4f95-96d9-e4b40599f012","Time-resolved gamma spectroscopy of single events","Wolszczak, W.W. (TU Delft RST/Luminescence Materials); Dorenbos, P. (TU Delft RST/Luminescence Materials)","","2018","In this article we present a method of characterizing scintillating materials by digitization of each individual scintillation pulse followed by digital signal processing. With this technique it is possible to measure the pulse shape and the energy of an absorbed gamma photon on an event-by-event basis. In contrast to time-correlated single photon counting technique, the digital approach provides a faster measurement, an active noise suppression, and enables characterization of scintillation pulses simultaneously in two domains: time and energy. We applied this method to study the pulse shape change of a CsI(Tl) scintillator with energy of gamma excitation. We confirmed previously published results and revealed new details of the phenomenon.","CsI(Tl); Data acquisition; Digital signal processing; Gamma spectroscopy; Pulse shape analysis; Time-resolved gamma spectroscopy","en","journal article","","","","","","Accepted Author Manuscript","","2020-01-06","","","RST/Luminescence Materials","","",""
"uuid:d237cc84-ea24-4020-a7bb-f4c9944c6616","http://resolver.tudelft.nl/uuid:d237cc84-ea24-4020-a7bb-f4c9944c6616","Improving olefin purification using metal organic frameworks with open metal sites","Luna-Triguero, A. (University Pablo de Olavide); Vicent Luna, J.M. (University Pablo de Olavide); Poursaeidesfahani, A. (TU Delft Engineering Thermodynamics); Vlugt, T.J.H. (TU Delft Engineering Thermodynamics); Sánchez-De-Armas, R. (University Pablo de Olavide); Gómez-Álvarez, P. (University Pablo de Olavide; Universidad de Huelva); Calero, S. (University Pablo de Olavide)","","2018","The separation and purification of light hydrocarbons is challenging in the industry. Recently, a ZJNU-30 metal-organic framework (MOF) has been found to have the potential for adsorption-based separation of olefins and diolefins with four carbon atoms [H. M. Liu et al. Chem. - Eur. J. 2016, 22, 14988-14997]. Our study corroborates this finding but reveals Fe-MOF-74 as a more efficient candidate for the separation because of the open metal sites. We performed adsorption-based separation, transient breakthrough curves, and density functional theory calculations. This combination of techniques provides an extensive understanding of the studied system. Using this MOF, we propose a separation scheme to obtain a high-purity product.","breakthrough curves; butene isomers; coordinatively unsaturated sites; molecular simulation; separation process","en","journal article","","","","","","Accepted Author Manuscript","","2019-04-19","","","Engineering Thermodynamics","","",""
"uuid:5b47f226-b284-4198-badb-b0de8f6e4079","http://resolver.tudelft.nl/uuid:5b47f226-b284-4198-badb-b0de8f6e4079","A stochastic process based reliability prediction method for LED driver","Sun, Bo (Guangdong University of Technology); Fan, Xuejun (Lamar University); van Driel, W.D. (TU Delft Electronic Components, Technology and Materials; Philips Lighting Research); Cui, Chengqiang (Guangdong University of Technology); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials)","","2018","In this study, we present a general methodology that combines the reliability theory with physics of failure for reliability prediction of an LED driver. More specifically, an integrated LED lamp, which includes an LED light source with statistical distribution of luminous flux, and a driver with a few critical components, is considered. The Wiener process is introduced to describe the randomness of lumen depreciation. The driver's survival probability is described using a general Markov Chain method. The system compact thermal model (physics of failure model) is developed to couple with the reliability methods used. Two scenarios are studied: Scenario S1 considers constant driver's operation temperature, while Scenario S2 considers driver's temperature rise due to lumen depreciation. It has been found that the wide life distribution of LEDs will lead to a large range of the driver's survival probability. 
The proposed analysis provides a general approach for an electronic system to integrate the reliability method with physics models.","LED driver; LED lamp; Lumen depreciation; Reliability prediction; Stochastic process","en","journal article","","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2021-05-28","","","Electronic Components, Technology and Materials","","",""
"uuid:feaedc8f-708b-4a18-aeba-7a9840f8e66a","http://resolver.tudelft.nl/uuid:feaedc8f-708b-4a18-aeba-7a9840f8e66a","Mechanical properties of bi- and poly-crystalline ice","Cao, Pinqiang (China University of Geosciences, Wuhan; Xiamen University); Wu, Jianyang (Xiamen University; Norwegian University of Science and Technology (NTNU)); Zhang, Zhisen (Xiamen University); Fang, Bin (China University of Geosciences, Wuhan); Peng, Li (China University of Geosciences, Wuhan); Li, Tianshu (The George Washington University); Vlugt, T.J.H. (TU Delft Engineering Thermodynamics); Ning, Fulong (China University of Geosciences, Wuhan)","","2018","A sound knowledge of fundamental mechanical properties of water ice is of crucial importance to address a wide range of applications in earth science, engineering, as well as ice sculpture and winter sports, such as ice skating, ice fishing, ice climbing, bobsleighs, and so on. Here, we report large-scale molecular dynamics (MD) simulations of mechanical properties of bi- and poly-crystalline hexagonal ice (Ih) under mechanical loads. Results show that bicrystals, upon tension, exhibit either brittle or ductile fracture, depending on the microstructure of grain boundaries (GBs), whereas they show ductile fracture by amorphization and crystallographic slips emitted from GBs under compression. Under shearing, the strength of bicrystals exhibits a characteristic plateau or sawtooth behavior beyond the initial elastic strains. Nanograined polycrystals are destabilized by strain-induced amorphization and collective GB sliding. Their mechanical responses depend on the grain size. Both tensile and compressive strengths decrease as grain size decreases, showing inverse Hall-Petch weakening behavior. A large fraction of amorphous water structure in polycrystals with small grain size is mainly responsible for the inverse Hall-Petch softening. 
Dislocation nucleation and propagation are also identified in nanograined ice, which is in good agreement with experimental measurements. Beyond the elastic strain, a combination of GB sliding, grain rotation, amorphization and recrystallization, phase transformation, and dislocation nucleation dominates the plastic deformation in both bicrystals and polycrystals.","Polycrystalline material; Chemical processes; Ductility; Crystallization; Materials analysis; Geophysics; Molecular dynamics; Deformation; Phase transitions; Crystallographic defects","en","journal article","","","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:682521a8-561e-4529-ab22-77b8f88fbc4f","http://resolver.tudelft.nl/uuid:682521a8-561e-4529-ab22-77b8f88fbc4f","Distributed Wiener-Based Reconstruction of Graph Signals","Isufi, E. (TU Delft Signal Processing Systems; University of Perugia); Di Lorenzo, Paolo (University of Perugia); Banelli, Paolo (University of Perugia); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","This paper proposes strategies for distributed Wiener-based reconstruction of graph signals from subsampled measurements. Given a stationary signal on a graph, we fit a distributed autoregressive moving average graph filter to a Wiener graph frequency response and propose two reconstruction strategies: i) reconstruction from a single temporal snapshot; ii) recursive signal reconstruction from a stream of noisy measurements. For both strategies, a mean square error analysis is performed to highlight the role played by the filter response and the sampled nodes, and to propose a graph sampling strategy. Our findings are validated with numerical results, which illustrate the potential of the proposed algorithms for distributed reconstruction of graph signals.","ARMA graph filters; Graph signal processing; stationary graph signals; Wiener regularization","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-03-01","","","Signal Processing Systems","","",""
"uuid:8f58e13e-f215-40cf-9709-3bd10270d844","http://resolver.tudelft.nl/uuid:8f58e13e-f215-40cf-9709-3bd10270d844","Sparsest Network Support Estimation: A Submodular Approach","Coutino, Mario (TU Delft Signal Processing Systems); Chepuri, S.P. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2018","In this work, we address the problem of identifying the underlying network structure of data. Different from other approaches, which are mainly based on convex relaxations of an integer problem, here we take a distinct route relying on algebraic properties of a matrix representation of the network. By describing what we call possible ambiguities on the network topology, we proceed to employ sub-modular analysis techniques for retrieving the network support, i.e., network edges. To achieve this we only make use of the network modes derived from the data. Numerical examples showcase the effectiveness of the proposed algorithm in recovering the support of sparse networks.","graph learning; Graph signal processing; network deconvolution; network topology inference; sparse graphs","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-02-20","","","Signal Processing Systems","","",""
"uuid:fd394cab-5f3e-407d-bfa3-20717b96db31","http://resolver.tudelft.nl/uuid:fd394cab-5f3e-407d-bfa3-20717b96db31","Design and Custom Fabrication of a Smart Temperature Sensor for an Organ-on-a-chip Platform","Martins Da Ponte, R. (TU Delft Bio-Electronics); Giagka, Vasiliki (TU Delft Bio-Electronics; Fraunhofer Institute for Reliability and Microintegration IZM); Serdijn, W.A. (TU Delft Bio-Electronics)","","2018","This paper reports on the design and fabrication of a time-mode signal-processing in situ temperature sensor customized for an organ-on-a-chip (OOC) application. The circuit was fabricated using an in-house integrated circuit (IC) technology that requires only seven lithographic steps and is compatible with a MEMS fabrication process. The proposed circuit is developed to provide the first out-of-incubator temperature monitoring of cell cultures on an OOC platform in a monolithic fabrication. Measurement results on wafer reveal a temperature measurement resolution of less than ±0.2 °C (3σ) and a maximum nonlinearity error of less than 0.3% across a temperature range from 25 °C to 100 °C.","IC MEMS co-design; Organ-on-a-chip; Smart temperature sensor; Time-mode signal-processing","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Bio-Electronics","","",""
"uuid:d1e87a58-67dc-497f-b300-ed7614e9ead9","http://resolver.tudelft.nl/uuid:d1e87a58-67dc-497f-b300-ed7614e9ead9","Exploring HPC and Big Data Convergence: A Graph Processing Study on Intel Knights Landing","Uta, Alexandru (Vrije Universiteit Amsterdam); Varbanescu, A.L. (Universiteit van Amsterdam); Musaafir, Ahmed (Vrije Universiteit Amsterdam); Lemaire, Chris (Student TU Delft); Iosup, A. (TU Delft Data-Intensive Systems; Vrije Universiteit Amsterdam)","O'Conner, L. (editor); Torres, H. (editor)","2018","The question 'Can big data and HPC infrastructure converge?' has important implications for many operators and clients of modern computing. However, answering it is challenging. The hardware is currently different, and fast evolving: big data uses machines with modest numbers of fat cores per socket, large caches, and much memory, whereas HPC uses machines with larger numbers of (thinner) cores, non-trivial NUMA architectures, and fast interconnects. In this work, we investigate the convergence of big data and HPC infrastructure for one of the most challenging application domains, highly irregular graph processing. We contrast through a systematic, experimental study of over 300,000 core-hours the performance of a modern multicore, Intel Knights Landing (KNL), and of traditional big data hardware, in processing representative graph workloads using state-of-the-art graph analytics platforms. The experimental results indicate KNL is convergence-ready, performance-wise, but only after extensive and expert-level tuning of software and hardware parameters.","Big Data; Graph Processing; HPC; HPC Big Data convergence; Intel Knights Landing; Performance evaluation","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Data-Intensive Systems","","",""
"uuid:b07cacdf-0923-4ee9-931a-2faccaecb631","http://resolver.tudelft.nl/uuid:b07cacdf-0923-4ee9-931a-2faccaecb631","Sampling and Reconstruction of Signals on Product Graphs","Ortiz-Jimenez, Guillermo (Student TU Delft); Coutino, Mario (TU Delft Signal Processing Systems); Chepuri, S.P. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","Cui, Shuguang (editor); Jafarkhani, Hamid (editor)","2018","In this paper, we consider the problem of subsampling and reconstruction of signals that reside on the vertices of a product graph, such as sensor network time series, genomic signals, or product ratings in a social network. Specifically, we leverage the product structure of the underlying domain and sample nodes from the graph factors. The proposed scheme is particularly useful for processing signals on large-scale product graphs. The sampling sets are designed using a low-complexity greedy algorithm and can be proven to be near-optimal. To illustrate the developed theory, numerical experiments based on real datasets are provided for sampling 3D dynamic point clouds and for active learning in recommender systems.","Active learning; Graph signal processing; Product graphs; Sparse sampling; Submodularity","en","conference paper","IEEE","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-08-21","","","Signal Processing Systems","","",""
"uuid:4d93c633-519c-41c5-9a5e-9845cf69386f","http://resolver.tudelft.nl/uuid:4d93c633-519c-41c5-9a5e-9845cf69386f","From strategic goals to business model innovation paths: an exploratory study","Heikkilä, Marikka (University of Turku); Bouwman, W.A.G.A. (TU Delft Information and Communication Technology); Heikkilä, Jukka (University of Turku)","","2018","Purpose: The purpose of this paper is to analyse how different strategic goals of micro-, small- and medium-sized firms (SMEs) relate to the business model innovation (BMI) paths that SMEs take when improving their business. Design/methodology/approach: The authors conducted 11 in-depth case studies involving SMEs innovating their business models (BMs). Findings: The authors found evidence that strategic goals of SMEs (start new business, growth and profitability) lead them to alternative innovation paths in terms of BM components affected. Growth seekers start from the right-hand side of a BM Canvas, while profitability seekers start from the back end, the left side of a Canvas; and new businesses adopt a cyclical approach considering BM components in turn, while at the same time redesigning and testing the BM. The findings of this study also indicate that all three paths gradually lead to improvement in several BM components. Research limitations/implications: Findings indicate that a strategic management view in which strategic goals define BMI also applies to SMEs. The distinctive BMI paths that the authors identified provide evidence to suggest that, although the SMEs may not have an explicitly formulated strategy, their strategic goals determine the type of improvements they make to their BM. All three SME groups started their improvements from different BM components and changed several elements in their BMs in a specific order, forming distinctive BMI paths. Finally, to understand the BMI in SMEs better, more research is needed into BMI processes and into the way BMI is managed in SMEs. 
Practical implications: The findings of this study help SMEs to anticipate the next steps in their path towards an improved BM. By mirroring their approach to the BMI paths, they can better manage their BM makeover process and focus on their innovation activities. For providers of BMI tools and methods, the study indicates which SME innovation tasks could be supported by tools and how the tools should be aligned with the BMI paths. Originality/value: BMI is attracting growing attention in both research and practice. However, knowledge concerning BMI in SMEs is limited. The authors contributed to BMI research by focussing on the BMI paths of SMEs, i.e. the often sequential, non-linear and iterative steps taken to improve the business by making changes to specific BM components.","Business model; Business model innovation; Business model innovation path; Innovation process; Multi-case study; SME","en","journal article","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:801f9d4a-86e0-4000-8277-1db980f5ab3c","http://resolver.tudelft.nl/uuid:801f9d4a-86e0-4000-8277-1db980f5ab3c","Linearized Bregman Iterations for Automatic Optical Fiber Fault Analysis","Lunglmayr, Michael (Johannes Kepler University Linz); Castro do Amaral, G. (TU Delft QID/Tittel Lab; TU Delft QuTech Advanced Research Centre; Kavli institute of nanoscience Delft; Pontifical Catholic University of Rio de Janeiro)","","2018","Supervision of the physical layer of optical networks is an extremely relevant subject. To detect fiber faults, single-ended solutions, such as optical time-domain reflectometry (OTDR), allow for precise measurements of fault profiles. Combining the OTDR with a signal processing approach for high-dimensional sparse parameter estimation allows for automated and reliable results in reduced time. In this paper, a measurement system composed of a photon-counting OTDR data acquisition unit and a processing unit based on a linearized Bregman iterations algorithm for automatic fault finding is proposed. An in-depth comparative study of the proposed algorithm's fault-finding prowess in the presence of noise is presented. Characteristics, such as sensitivity, specificity, processing time, and complexity, are analyzed in simulated environments. Real-life measurements conducted using the photon-counting OTDR subsystem for data acquisition and the linearized Bregman-based processing unit for automated data analysis demonstrated accurate results. It is concluded that the proposed measurement system is particularly well-suited to the task of fault finding. 
The algorithm's natural characteristics foster embedding the solution in digital hardware, allowing for reduced costs and processing time.","Fault location; Market research; optical fiber measurements; Optical network units; Optical pulses; Optical sensors; Optical signal processing; optical time-domain reflectometry (OTDR); Optical variables measurement; signal processing; Signal processing algorithms","en","journal article","","","","","","Accepted Author Manuscript","","","","","QID/Tittel Lab","","",""
"uuid:be1dc9c7-6aac-477b-b22d-e95c2cbeb01f","http://resolver.tudelft.nl/uuid:be1dc9c7-6aac-477b-b22d-e95c2cbeb01f","Separating Geophysical Signals Using GRACE and High-Resolution Data: A Case Study in Antarctica","Engels, Olga (TU Delft Physical and Space Geodesy; Universität Bonn); Gunter, B.C. (Georgia Institute of Technology); Riva, R.E.M. (TU Delft Physical and Space Geodesy); Klees, R. (TU Delft Physical and Space Geodesy)","","2018","To fully exploit data from the Gravity Recovery and Climate Experiment (GRACE), we separate geophysical signals observed by GRACE in Antarctica by deriving high-spatial-resolution maps for present-day glacial isostatic adjustment (GIA) and ice-mass changes with the least possible noise level. For this, we simultaneously (i) improve the postprocessing of gravity data and (ii) consistently combine them with high-resolution data from the Ice, Cloud, and land Elevation Satellite (ICESat) altimeter and the Regional Atmospheric Climate Model 2.3 (RACMO). We use GPS observations to discriminate between various candidate spatial patterns of vertical motions caused by GIA. The ICESat-RACMO combination determines the spatial resolution of estimated ice-mass changes. The results suggest the capability of the developed approach to retrieve the complex spatial pattern of present-day GIA, such as a pronounced subsidence in the proximity of the Kamb Ice Stream and pronounced uplift in the Amundsen Sea Sector.","Antarctica; data-driven approach; GIA; GRACE post-processing; high spatial resolution; ice mass changes","en","journal article","","","","","","","","2019-07-31","","","Physical and Space Geodesy","","",""
"uuid:4bda1d0e-7caa-427d-91da-9dd9e0c5a18e","http://resolver.tudelft.nl/uuid:4bda1d0e-7caa-427d-91da-9dd9e0c5a18e","A fuzzy multi-criteria decision making approach for analyzing the risks and benefits of opening data","Luthfi, A. (TU Delft Information and Communication Technology; Universitas Islam Indonesia); Rehena, Z. (TU Delft Information and Communication Technology; Aliah University); Janssen, M.F.W.H.A. (TU Delft Information and Communication Technology); Crompvoets, Joep (Katholieke Universiteit Leuven)","","2018","Governments are releasing their data to the public to accomplish benefits like the creation of transparency, accountability, citizen engagement and to enable business innovation. At the same time, decision-makers are reluctant to open their data due to potential risks like misuse, sensitivity, ownership, and inaccuracy of the data. The goal of the study presented in this paper is to develop a Fuzzy Multi-Criteria Decision Making (FMCDM) approach to analyze the risks and benefits to determine the decision to open a dataset. FMCDM is chosen due to its capability to measure and weight the relative importance of the criteria. FMCDM needs the weighting of criteria as input. For this, the Fuzzy Analytical Hierarchy Process (FAHP) is utilized, collecting input from experts’ knowledge and expertise. The scores for each criterion are summed up to rank the importance of the alternatives. Four main criteria are used: data sensitivity and data ownership representing risk criteria, and data availability and data trustworthiness representing benefit criteria. For each criterion, two sub-criteria were identified. Four types of decisions to open data can be made: completely open, maintain suppression, provide limited access, and remain closed. A patient health record dataset is used to illustrate the approach. 
In further research, we recommend developing automated approaches that take a dataset as input and provide advice.","Analytic hierarchy process; Benefits; Fuzzy; Multi-criteria decision making; Open data; Risks","en","conference paper","Springer","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2019-04-12","","","Information and Communication Technology","","",""
"uuid:f6d01dfc-9267-4a21-8432-8a6a1eb30daa","http://resolver.tudelft.nl/uuid:f6d01dfc-9267-4a21-8432-8a6a1eb30daa","Reservoir lithology determination from seismic inversion results using markov processes","Feng, R. (TU Delft Applied Geology)","Luthi, S.M. (promotor); Drijkoningen, G.G. (copromotor); Delft University of Technology (degree granting institution)","2017","For reservoir characterization, the subsurface heterogeneity needs to be qualified in which the distribution of lithologies is an essential part since it determines the location and migration paths of hydrocarbons. Preliminary analysis of well-log data could help to identify various lithologies in a one-dimensional direction (depth), while the lateral information is missing because of the sparse locations. On the other hand, a larger areal coverage of the target reservoir could be provided by seismic data, and from the inversion thereof, inferences of lithologies could be made. However, just like other geophysical inversions, translation of seismic inversion results to these categorical variables (lithologies) is a non-unique problem, which means that different lithologies could produce the same, or similar, property responses. In order to mitigate this problem, geological prior information should be introduced in the sense of Bayes’ theorem. Thus, the main motivation for this thesis is to investigate the usage of geological prior information in the classification of reservoir lithologies from properties obtained from seismic inversion. Different methods have been tried in this process in order to fully understand their performances and to make comparisons.","seismic inversion; Markov processes; reservoir lithology","en","doctoral thesis","","978-94-6186-864-0","","","","","","","","","Applied Geology","","",""
"uuid:66da67a6-cf90-4a71-822e-3d27d0e7ec8d","http://resolver.tudelft.nl/uuid:66da67a6-cf90-4a71-822e-3d27d0e7ec8d","Applications of passive microwave data to monitor inundated areas and model stream flow","Shang, H. (TU Delft Optical and Laser Remote Sensing)","Menenti, M. (promotor); Jia, L. (promotor); Steele-Dunne, S.C. (copromotor); Delft University of Technology (degree granting institution)","2017","The observation of surface water bodies in all weather conditions and better knowledge about inundation patterns are important for water resource management and flood early warning. Microwave radiometers at 37 GHz were applied to observe and study the inundation pattern in large subtropical floodplains in China, i.e. the Poyang Lake and Dongting Lake floodplains, due to the trade-off between the capability to penetrate hydrometeors and vegetation, revisiting time, and spatial coverage and resolution. Taking the shallow sensing depth at 37 GHz into account, open water, inundated area and water saturated soil surface all determine the surface emittance measured by the radiometer. Thus, Water Saturated Surface (WSS) is defined as the combination of these three land surface elements.
In subtropical regions, seasonal changes in vegetation cover and various surface roughness conditions are the major challenges for the observation of surface water bodies with microwave radiometers. Atmospheric attenuation, observation gaps and errors in the microwave observations reduce the quality of daily radiometric observations. To deal with the attenuation due to vegetation and surface roughness, a two-step model was developed: the first step is to retrieve the polarization difference emissivity from Polarization Difference Brightness Temperature (PDBT) at 37 GHz with the simplified radiative transfer model and the vegetation optical thickness at 37 GHz parameterized from the Normalized Difference Vegetation Index (NDVI); the second step is to retrieve the fractional area of WSS from the emissivity difference with a linear model, which can be parameterized according to the Qp surface roughness model. To remove the noise and extract the surface signal (including surface emittance and vegetation attenuation) from the daily PDBT time series, the Time Series Analysis Procedure (TSAP) was developed to identify the spectral features of noisy components in the frequency domain and remove them with a proper filter. The overall method combined the TSAP and the two-step model to derive daily observations of the WSS area. The retrieved WSS area in the Poyang Lake floodplain was in good agreement with the lake area observed from the MODerate-resolution Imaging Spectroradiometer (MODIS) and Advanced Synthetic Aperture Radar (ASAR). The observations and analysis of the inundation patterns in the Poyang Lake and Dongting Lake floodplains with this method illustrated the close relationship between inundated area, precipitation and stream flow.
Furthermore, a lumped hydrological model, named the discrete rainfall-runoff model, was developed to fully use the retrieved WSS area and to study the role of inundated area in stream flow production. This model simulates stream flow as the integration of contributions of antecedent precipitation in a certain period. Three implementations of the model were developed with the help of ground water table depth and the retrieved WSS area. The case study in the Xiangjiang River basin (upstream catchment of the Dongting Lake floodplain), China, illustrated that: 1) the longest duration of antecedent precipitation is a key parameter to determine model performance; 2) long duration would increase the model uncertainty and lead to overfitting; 3) the application of the WSS area can reduce the duration required to achieve a reasonable accuracy. The model parameters indicated the interaction between stream flow and various water storages, and the calibration results of three implementations implied the recharge period of ground water.
The use of specialized devices, in specialized server designs optimized for a certain class of workloads, is gaining momentum. Data movement has been demonstrated to be a significant drain on energy, and is furthermore a performance bottleneck when data is moved over an interconnect with limited bandwidth. With data becoming an increasingly important asset for governments, companies, and individuals, the development of systems optimized at the device and server level for data-intensive workloads is
necessary. In this work, we explore some of the fundamentals required for such a system, as well as key use-cases...","Square Kilometre Array; computer architecture; near-data processing; high-performance computing","en","doctoral thesis","","978-94-6186-821-3","","","","","","","","","Computer Engineering","","",""
"uuid:1ed4a443-3934-436a-ad8b-0e496b0f7be7","http://resolver.tudelft.nl/uuid:1ed4a443-3934-436a-ad8b-0e496b0f7be7","Form Follows Feeling: The Acquisition of Design Expertise and the function of Aesthesis in the Design Process","Curry, T.M. (TU Delft OLD Urban Compositions)","Bekkering, H.C. (promotor); Delft University of Technology (degree granting institution)","2017","While the consideration of functional and technical criteria, as well as a sense of coherence are basic requirements for solving a design problem; it is the ability to induce an intended quality of aesthetic experience that is the hallmark of design expertise. Expert designers possess a highly developed sense of design, or what in this research is called aesthesis. Reflection on 25 years teaching design in the USA, Hungary, and China led to the observation that most successful design students, more than intellectual ability, drawing, model making or drive, all seemed to possess what may be called an intuitive sense of good design. It is not that they already know how to design, or that they are natural designers, it is that they have a more developed sense aesthesis. This research takes a multi-disciplinary approach to build a theory that describes what is involved in acquiring design expertise,identifies how aesthesis functions in the design process, and determines if what appears to be an intuitive sense of design is just natural talent or an acquired ability.
The research started with topics related to design methodology, which led to questions related to cognitive psychology, especially theories of problem-solving. An in-depth review of research in embodied cognition challenged the disembodied concept of the mind and related presuppositions, and reintroduced the body as an essential aspect of human cognition. This led to related topics including: pre-noetic (pre-verbal) knowledge, the cognitive architecture of the brain, sense mechanisms and perception, limitations and types of memory as well as the processing capacity of the brain, and especially how emotions/feelings function in human cognition, offering insight into how designing functions as a cognitive process.
The research provides evidence that more than technical rationality, expert designers rely heavily on a highly developed embodied way of knowing (tacit knowledge) throughout the design process that allows them to know more than they can say. Indeed, this is the hallmark of expert performers in many fields. However, this ability is not to be understood as natural talent, but as the result of an intense developmental process that includes years of deliberate practice necessary to restructure the brain and adapt the body in a manner that facilitates exceptional performance. For expert designers it is aesthesis (a kind of body knowledge), functioning as a meta-heuristic, that allows them to solve a complex problem situation in a manner that appears effortless. Aesthesis is an ability that everyone possesses, but that expert designers have highly developed and adapted to allow them to produce buildings and built environments that induce an intended quality of aesthetic experience in the user. It is a cognitive ability that functions to both (re)structure the design problem and evaluate the solution, and allows the designer to inhabit the design world feelingly while seeking aesthetic resonance that anticipates the quality of atmosphere another is likely to experience. This ability is critical to the acquisition of design expertise.","Architecture; Cognitive Psychology; Problem-solving; Embodied Cognition; Tacit Knowledge; Design Process","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-63-3","","","","A+BE | Architecture and the Built Environment No 6 (2017)","","","","","OLD Urban Compositions","","",""
"uuid:da12a36e-38d1-45cb-a366-c3458e851226","http://resolver.tudelft.nl/uuid:da12a36e-38d1-45cb-a366-c3458e851226","Stochastic renewal process models for estimation of damage cost over the life-cycle of a structure","Pandey, Mahesh D. (University of Waterloo); van der Weide, J.A.M. (TU Delft Applied Probability)","","2017","In the life-cycle cost analysis of a structure, the total cost of damage caused by external hazards like earthquakes, wind storms and flood is an important but highly uncertain component. In the literature, the expected damage cost is typically analyzed under the assumption of either the homogeneous Poisson process or the renewal process in an infinite time horizon (i.e., asymptotic solution). The paper reformulates the damage cost estimation problem as a compound renewal process and derives general solutions for the mean and variance of total cost, with and without discounting, over the life cycle of the structure. The paper highlights a fundamental property of the renewal process, referred to as renewal decomposition, which is a key to solving a wide range of life cycle analysis problems. The proposed formulation generalizes the results given in the literature, and it can be used to optimize the design and life cycle performance of structures.","Discounted cost; Expected cost; Life cycle analysis; Renewal function rate; Renewal process; Seismic risk; Stochastic process; Structural safety","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:76d5317f-6216-489c-a0c2-8c65db0a62c4","http://resolver.tudelft.nl/uuid:76d5317f-6216-489c-a0c2-8c65db0a62c4","Production of low porosity recycled sand from construction and demolition waste","Ulsen, Carina; Kahn, Henrique; Antoniassi, Juliana L.; Martins, Isabel","","2017","Abstract only - The existing construction waste recycling technologies and standards have been long applied in construction and demolition (C&D) waste recycling, mainly focused on the production and use of coarse recycled aggregates. Very few papers and process focus on the production of recycled sand although some previous results show that fine aggregates fraction (below 4.8 mm) represents around 40 to 60% (in mass) of the Brazilian waste and it is usually down cycled as road sub-base or disposed in landfills. The quality of the recycled aggregate is strictly related to the content of porous and low strength phases, as the patches of cement attached to the recycled aggregate. Despite being the crucial factor for aggregate performance, the removal of adhered cement paste is not a simple task. Some technologies have already been described in the literature, even though, to the moment none of these technologies has clearly succeeded in reaching the large market available. This paper presents a summary of the main properties of the sand produced from C&D waste by tertiary crushing at vertical shaft impactor crusher (attrition and abrasion comminution). Additionally mineral processing technologies were applied on the attained product, such as density concentration by shaking table and spirals and magnetic separation at rare earth roll separator. The main properties of recycled sand are discussed and compared to the previous C&D waste, respectively: apparent density, water absorption, chemical composition, porosity, particle shape and cement paste content. 
The results demonstrated that comminution by vertical shaft impact crusher allowed the production of a VSI sand with low porosity and water absorption. Concentration by shaking table at narrow sieve fractions led to a product with low porosity and a reduced content of cement paste, although the spiral concentrator products for the bulk sample showed only a classification effect rather than concentration. The non-magnetic products presented characteristics similar to those of the heavy product from the shaking table, plus separation of micaceous minerals. Producing sand from construction and demolition waste through mineral processing unit operations changes the recycling approach and contributes to upcycling the recycled sand.","recycled sand; mineral processing; CDW recycling; products characterization; mineral separability","en","conference paper","","","","","","","","","","","","",""
"uuid:a219c081-c5a5-40c6-b02e-90e80f996121","http://resolver.tudelft.nl/uuid:a219c081-c5a5-40c6-b02e-90e80f996121","Multi-criteria study for recycled concrete aggregate separation process","Braymand, Sandrine; Roux, Sébastien; Fares, Hanaa; Feugeas, Françoise","","2017","Construction and demolition waste stream has generated news materials that may be reintroduced into new concrete, e.g. as recycled aggregates. The specific feature of recycled concrete aggregates (RCA) is the presence of hardened mortars influencing their behaviour. This study aims to distinguish processes that allow the complete separation and quantification of attached mortar. The laboratory developed method has to be transferable on a wider scale to be exploited on a real recycling platform. This study is linked to the RECYBETON National Research Project involving public research laboratories, institutes and private companies. Several methods are tested in laboratory conditions to determine their efficiency. They are based on mechanical, chemical and physical principles. The definition of this efficiency concept and the mortar content denomination are also discussed. Originality of this study consists in optimizing hot or cold thermal processes combined with a mechanical treatment. To perform that, a multi-criteria phase experiment was carried out and several values of the multi-criteria parameters were optimized. Results allow for a thorough knowledge of treatment efficiency. However, it appears that not any one method is 100% satisfactory as aggregates are never completely cleaned and/or are damaged.","process; Recycled concrete aggregates; attached mortar; separation","en","conference paper","","","","","","","","","","","","","",""
"uuid:d89170ea-3fd9-4ad6-b9f2-ad56863ecd5e","http://resolver.tudelft.nl/uuid:d89170ea-3fd9-4ad6-b9f2-ad56863ecd5e","It is possible to economically upcyle recycled aggregates","Le Guen, Lauredan; Ferro, Gerard; Cazacliu, Bogdan","","2017","The waste management is a pregnant problematic in the European continent and in France too. The new paradigm called circular economy aims to be driver for the waste valorization. This objective is particularly applied in the civil engineering, the first generator of CDW (Construction and Demolition Waste) in France. In this context, except the eco-construction, the recycling or reusing is currently necessary for the CDW valorization. But, the economic criteria such sell price or manufacturing cost are brakes to facilitate the waste valorization. In addition, using the waste is confronted to standards and NIMBY (Not In My BackYard). To lift these brakes, the waste upcycling allows producing high quality materials from CDW. This choice seems to be a relevant orientation in terms of economic and technology. For further details, the recycling plant can produce high quality materials with a sell price more competitive than the high quality raw materials. This economic advantage can guarantee the investment necessary to set up this kind of plant. For example, the Esterel group has chosen this orientation since several years. The obtained results of the materials valorization are compared to the results from a study realized by the French energy agency. Finally, this comparison allows concluding about the relevancy of the CDW upcycling, which can be economically sustainable.","waste management; CDW; upcycling; economic; technologic process","en","conference paper","","","","","","","","","","","","","",""
"uuid:d26390f0-e071-416a-94ff-a387ad487737","http://resolver.tudelft.nl/uuid:d26390f0-e071-416a-94ff-a387ad487737","The influence of parent concrete and milling intensity on the properties of recycled aggregates","Lotfi, Somayeh; Rem, Peter; Deja, Jan; Mróz, Radosław","","2017","The C2CA concrete recycling process consists of a combination of smart demolition, gentle grinding of the crushed concrete in an autogenous mill, and a novel dry classification technology called ADR to remove the fines. The` main factors in the C2CA process which influence the properties of Recycled Aggregates or Recycled Aggregate Concrete (RAC) include the type of Parent Concrete (PC), the intensity of autogenous milling and ADR cutsize point. This study aims to investigate the influence of PC and intensity of the autogenous milling on the quality of the produced recycled aggregates. Three types of concrete which are frequently demanded in the Dutch market were cast as PC and their fresh and hardened properties were tested. After near one year curing of PC samples, they were recycled independently while the aforementioned recycling factors were varied. The effects of different recycling variables on the water absorption, density, crushing resistance and durability of produced recycled aggregates were investigated. According to the results, type of the parent concrete is the predominant factor influencing the properties of the recycled aggregates. Milling intensity was found to be effective on improving the properties of recycled aggregates coming from weaker parent concrete. The experimental results suggest that among various milling intensities, milling at medium shear and medium compression improves the overall quality of RA.","C2CA process; Concrete recycling; Recycled aggregate; Recycled aggregate concrete; ADR","en","conference paper","","","","","","","","","","","","","",""
"uuid:04761262-4d5e-4186-93cc-3b5fc3dc4000","http://resolver.tudelft.nl/uuid:04761262-4d5e-4186-93cc-3b5fc3dc4000","The Manufacture of Lightweight Aggregates from Recycled Masonry Rubble","Mueller, Anette; Liebezeit, Steffen; Leydolph, Barbara; Palzer, Ulrich","","2017","At present, heterogeneous and fine-grained masonry rubble can only be recycled at very low level. To overcome this limitation, the material was employed as feedstock for the production of lightweight aggregates in a thermal process similar to that used in the manufacture of expanded clay and expanded slate. To that end, the fundamental suitability of masonry rubble as a raw material was evaluated. Experiments were carried out which indicated that lightweight granules with defined, adjustable properties similar to those of natural-materialbased aggregates could be manufactured from masonry rubble. Structural lightweight concretes produced with these secondary aggregates achieved comparable performance to lightweight concretes produced with conventional expanded clay. Lightweight recycled building material aggregates represent a product that hardly requires any primary resources in its manufacture.","lightweight aggregate; Masonry rubble; feedstock recycling; expansion process; rotary kiln","en","conference paper","","","","","","","","","","","","","",""
"uuid:a0ea505d-744b-471a-b8ef-a9b0a4eedae3","http://resolver.tudelft.nl/uuid:a0ea505d-744b-471a-b8ef-a9b0a4eedae3","Characterization of filler fraction from the production of recycled sand from construction and demolition waste","Martens, Isabel; Ulsen, Carina; Kahn, Henrique; Landmann, Mirko; Mueller, Anette","","2017","The concept of circular economy envisages that the highest utility and value of a material can be achieved through proper management along its all life cycle. Therefore reducing waste, through increased reuse and recycling, is a major driver to close the loop. Within this framework, construction and demolition waste, which constitutes up to 30% of world waste generation, is a priority in order to achieve a high level of resource efficiency in the construction sector. In general, it is noticed that the fine fraction from traditional CDW processing, representing from 40 to 60% mass of the CDW mineral phases, is not suitable to be used as a secondary material for manufacture of concrete where high mechanical performance is required. Moreover, the filler fraction produced on recycling processes is rarely considered for recycling due to its high surface area and irregular shape. Filler properties depend on previous construction material composition and mineral processing applied for recycling. This paper aims to characterize the filler fraction from the production of recycled sand from construction and demolition waste processing, by particle size distribution, chemical and mineralogical composition and specific density and highlights some drawbacks related to their application as filler in cementitious building materials.","recycled filler; mineral processing; CDW recycling; products characterization","en","conference paper","","","","","","","","","","","","","",""
"uuid:57ba5640-0606-4ede-a227-e8b07e0b88f7","http://resolver.tudelft.nl/uuid:57ba5640-0606-4ede-a227-e8b07e0b88f7","Solution-Based Fabrication of Polycrystalline Si Thin-Film Transistors from Recycled Polysilanes","Sberna, P.M. (TU Delft Tera-Hertz Sensing); Trifunovic, M. (TU Delft QID/Ishihara Lab; TU Delft QuTech Advanced Research Centre); Ishihara, R. (TU Delft QID/Ishihara Lab; TU Delft Quantum Integration Technology; TU Delft QuTech Advanced Research Centre; Kavli institute of nanoscience Delft)","","2017","Currently, research has been focusing on printing and laser crystallization of cyclosilanes, bringing to life polycrystalline silicon (poly-Si) thin-film transistors (TFTs) with outstanding properties. However, the synthesis of these Sibased inks is generally complex and expensive. Here, we prove that a polysilane ink, obtained as a byproduct of silicon gases and derivatives, can be used successfully for the synthesis of poly-Si by laser annealing, at room temperature, and for n- and p-channel TFTs. The devices, fabricated according to CMOS compatible processes at 350 °C, showed field effect mobilities up to 8 and 2 cm2/(V s) for n- and p-type TFTs, respectively. The presented method combines a low-cost coating technique with the usage of recycled material, opening a route to a convenient and sustainable production of large-area, flexible, and even disposable/single-use electronics.","Disilane byproduct; Byproduct recycle; Polysilane; Low-temperature fabrication; Thin-film transistor; Polycrystalline silicon; Solution processing","en","journal article","","","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:ac9cac89-36a8-457d-8c3e-cee73091aa93","http://resolver.tudelft.nl/uuid:ac9cac89-36a8-457d-8c3e-cee73091aa93","A process-based, idealized study of salt and sediment dynamics in well-mixed estuaries","Wei, X. (TU Delft Mathematical Physics)","Heemink, A.W. (promotor); Schuttelaars, H.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Estuaries are important ecosystems accommodating a large variety of living species. Estuaries are also important to people by their demand of freshwater for drinking, irrigation, and industry. Due to natural changes and human activities, the estuarine water quality, influenced by both salinity and turbidity (the cloudiness or haziness of water), has been greatly changed in many estuaries and may continue to change in the future. To predict and control the salt intrusion and the occurrence of high turbidity levels, it is essential to understand the physical mechanisms governing the estuarine dynamics. To that end, this thesis provides a systematical investigation of the dominant physical processes which result in salt intrusion and the formation of the Estuarine Turbidity Maxima (ETM’s) in well-mixed estuaries.","well-mixed; salt dynamics; sediment transport; Idealized model; Estuarine turbidity maxima; gravitational circulation; lateral processes; tidal advection","en","doctoral thesis","","978-94-6186-828-2","","","","","","","","","Mathematical Physics","","",""
"uuid:c67d351f-c18c-4918-9097-2b7d9a76cc87","http://resolver.tudelft.nl/uuid:c67d351f-c18c-4918-9097-2b7d9a76cc87","Challenges and opportunities: One stop processing of automatic large-scale base map production using airborne lidar data within gis environment case study: Makassar City, Indonesia","Widyaningrum, E. (TU Delft Optical and Laser Remote Sensing; Geospatial Information Agency); Gorte, B.G.H. (TU Delft Optical and Laser Remote Sensing)","","2017","LiDAR data acquisition is recognized as one of the fastest solutions to provide basis data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed one possible scheme to accelerate the large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressive advanced technology, Geographic Information System (GIS) open possibilities to deal with geospatial data automatic processing and analyses. Considering further needs of spatial data sharing and integration, the one stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for the base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, quality, and the confusion matrix.","Accuracy assessment; Automation; Base map production; GIS; LiDAR processing","en","journal article","","","","","","","","","","","Optical and Laser Remote Sensing","","",""
"uuid:60b44e91-6dc3-4269-a673-1a8b4cb29d82","http://resolver.tudelft.nl/uuid:60b44e91-6dc3-4269-a673-1a8b4cb29d82","Filtering Random Graph Processes over Random Time-Varying Graphs","Isufi, E. (TU Delft Signal Processing Systems); Loukas, A. (Swiss Federal Institute of Technology); Simonetto, A. (IBM Research Ireland); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2017","Graph filters play a key role in processing the graph spectra of signals supported on the vertices of a graph. However, despite their widespread use, graph filters have been analyzed only in the deterministic setting, ignoring the impact of stochasticity in both the graph topology and the signal itself. To bridge this gap, we examine the statistical behavior of the two key filter types, finite impulse response and autoregressive moving average graph filters, when operating on random time-varying graph signals (or random graph processes) over random time-varying graphs. Our analysis shows that 1) in expectation, the filters behave as the same deterministic filters operating on a deterministic graph, being the expected graph, having as input signal a deterministic signal, being the expected signal, and 2) there are meaningful upper bounds for the variance of the filter output. We conclude this paper by proposing two novel ways of exploiting randomness to improve (joint graph-time) noise cancellation, as well as to reduce the computational complexity of graph filtering. As demonstrated by numerical results, these methods outperform the disjoint average and denoise algorithm and yield a (up to) four times complexity reduction, with a very little difference from the optimal solution.","graph filters; graph signal denoising; graph sparsification; random graph signals; random graphs; Signal processing on graphs","en","journal article","","","","","","Accepted Author Manuscript","","","","","Signal Processing Systems","","",""
"uuid:be493008-78cc-46fa-937e-ee7de4559d98","http://resolver.tudelft.nl/uuid:be493008-78cc-46fa-937e-ee7de4559d98","Large-scale copyright enforcement and human rights safeguards in online markets: A comparative study of 22 sanctioning mechanisms from eight enforcement strategies in six countries between 2004 and 2014","Kreiken, F.H. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Delft University of Technology (degree granting institution)","2017","The Internet has facilitated large-scale copyright infringement. Fighting this one case at a time via the standard civil law procedures is costly in terms of time and money. In response, copyright holders have adopted new strategies that they hoped would be more effective at large-scale enforcement. The question is how these large-scale enforcement procedures impact procedural safeguards, most notably due process and fair trial. Empirical research into large-scale recent enforcement strategies has been limited and tended to focus on individual strategies, rather than on comparative analysis across different strategies and jurisdictions. This dissertation sets out to fill this gap. It presents a comparative empirical study of 22 sanctioning mechanisms from eight enforcement strategies in six countries between 2004 and 2014. It adds to the discussion on the regulation of copyrights and can help policymakers by illustrating the effect of choices made in different countries. For researchers in the field of information policy and law, it provides a detailed description of different enforcement initiatives and adds to the studies on human rights. This study shows that copyright enforcement procedures are able to scale-up only by offering fewer procedural safeguards to sanctioned parties. Similarly, procedures that impact on a larger scale provide less severe sanctions. 
The research has also shown that infringement levels are by and large unchanged, and that enforcement procedures create substantial costs, a significant portion of which are externalized to the state and to third parties.","copyright; enforcement; safeguards; due process; fair trial; privacy; internet; governance","en","doctoral thesis","","978-90-79787-69-2","","","","NGInfra PhD Thesis Series on Infrastructures; 81 Advisor: David Koepsell","","","","","Organisation & Governance","","",""
"uuid:3856b4ce-e91c-47ea-8bb7-bbf15d5ebdd7","http://resolver.tudelft.nl/uuid:3856b4ce-e91c-47ea-8bb7-bbf15d5ebdd7","Structural and Exchange Components in Processes of Neighbourhood Change: A Social Mobility Approach","Modai-Snir, T. (TU Delft OLD Urban Renewal and Housing); van Ham, M. (TU Delft OLD Urban Renewal and Housing; Institute for the Study of Labor (IZA))","","2017","Neighbourhood socioeconomic change is a complex phenomenon which is driven by multiple macro- and micro-level processes. Most theoretical and empirical work has focused on the role of urban-level processes, such as filtering, life-cycle, and social dynamics. For individual neighbourhoods, these processes generate flows of different socioeconomic groups, which consequently leads to an exchange of relative positions in the metropolitan hierarchy (‘exchange’ effect) where some neighbourhoods move up and others move down. Neighbourhoods are also affected by structural processes that operate beyond the urban level. They can generate upward or downward shifts of absolute income across a whole array of neighbourhoods (‘growth/decline’ effect), or change the inequality among neighbourhoods, where the top and bottom of the neighbourhood hierarchy move away from each other (‘inequality’ effect). A common practice in neighbourhood change studies is to represent neighbourhood status as relative to the respective metropolitan area; this neutralizes the ‘growth/decline’ effect and ignores an important source of change and divergence between neighbourhoods in different regions. Some specific relative measures of change do capture the ‘inequality’ effect but confound the ‘exchange’ and ‘inequality’ effects. This paper introduces a methodological approach that decomposes total neighbourhood socioeconomic change, measured in absolute terms, into components of ‘exchange’, ‘growth/decline’ and ‘inequality’. 
It applies a decomposition method presented by Van Kerm (2004), developed for understanding income mobility of individuals. The approach (1) acknowledges the role of structural processes in neighbourhood change, and (2) makes a distinction between different processes that generate neighbourhood change which is essential for comparative research.","urban change; neighbourhood change; structural processes; relative change; absolute change; inequality","en","working paper","Forschungsinstitut zur Zukunft der Arbeit/ Institute for the Study of Labor (IZA)","","","","","","","","","","OLD Urban Renewal and Housing","","",""
"uuid:ca460c12-bf40-4f40-9240-d6c8aa5c37ca","http://resolver.tudelft.nl/uuid:ca460c12-bf40-4f40-9240-d6c8aa5c37ca","Monolithic 3D Wafer Level Integration: Applied for Smart LED Wafer Level Packaging","Koladouz Esfahani, Z. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); van Zeijl, H.W. (copromotor); Delft University of Technology (degree granting institution)","2017","","System-in-Package; 3D wafer-level integration; LED; high aspect ratio lithography; multi-step imaging; smart silicon interposer; side-wall photodiode; blue/UV selective photodetector; sensor readout; BiCMOS process; wafer-level optic; multidisciplinary simulation; optical simulation","en","doctoral thesis","","978-94-028-0513-0","","","","","","2019-01-19","","","Electronic Components, Technology and Materials","","",""
"uuid:0f4de061-1a6e-49eb-8182-edb56f4bce9d","http://resolver.tudelft.nl/uuid:0f4de061-1a6e-49eb-8182-edb56f4bce9d","Forecasting design and decision paths in ship design using the ship-centric Markov decision process model","Kana, A.A.","","2017","","decision making; ship design; Markov decision process; eigenvector analysis; ballast water compliance","","journal article","","","","","","","","2042-01-01","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","",""
"uuid:8d4be5f9-037d-4bcb-ba74-da638938290a","http://resolver.tudelft.nl/uuid:8d4be5f9-037d-4bcb-ba74-da638938290a","Smart sensors in asphalt: monitoring key process parameters during and post construction","Miller, Seirgei (University of Twente); Chakraborty, Joyraj (University of Twente); van der Vegt, Jurian (University of Twente); Brinkerink, Daan (University of Twente); Erkens, S. (TU Delft Pavement Engineering); Liu, X. (TU Delft Pavement Engineering); Anupam, K. (TU Delft Pavement Engineering); Sluer, Berwich (Royal Boskalis Westminster); Mohajeri, Mohamad (Royal Boskalis Westminster)","","2017","The Fibre Bragg Gratings (FBG) technology, based on integrated photonics, offers specific benefits including thermal mapping, damage detection, and shape- and distributed sensing. This makes it useful for determining pavement behaviour during extreme weather conditions, e.g. freeze-thaw cycles, when harsh winter conditions could damage the asphalt surfacing layer. However, the harsh construction environment and traffic loading highlight the high-risk challenge of installing the sensor into the asphalt layer in a noninvasive manner so that the key parameters are accurately measured during and after construction.","smart sensors; Fibre Bragg Gratings technology; asphalt construction process","en","journal article","","","","","","","","","","","Pavement Engineering","","",""
"uuid:cf9fb39a-8a9b-4f6f-93b2-2f6ddbd28c0b","http://resolver.tudelft.nl/uuid:cf9fb39a-8a9b-4f6f-93b2-2f6ddbd28c0b","A simulator-assisted workshop for teaching chemostat cultivation in academic classes on microbial physiology","Hakkaart, X.D.V. (TU Delft BT/Industriele Microbiologie); Pronk, J.T. (TU Delft BT/Biotechnologie); van Maris, A.J.A. (TU Delft BT/Industriele Microbiologie)","","2017","Understanding microbial growth and metabolism is a key learning objective of microbiology and biotechnology courses, essential for understanding microbial ecology, microbial biotechnology and medical microbiology. Chemostat cultivation, a key research tool in microbial physiology that enables quantitative analysis of growth and metabolism under tightly defined conditions, provides a powerful platform to teach key features of microbial growth and metabolism. Substrate-limited chemostat cultivation can be mathematically described by four equations. These encompass mass balances for biomass and substrate, an empirical relation that describes distribution of
consumed substrate over growth and maintenance energy requirements (Pirt equation), and a Monod-type equation that describes the relation between substrate concentration and substrate-consumption rate. The authors felt that the abstract nature of these mathematical equations and a lack of visualization contributed to a suboptimal operative understanding of quantitative microbial physiology among students who followed their Microbial Physiology B.Sc. courses. The studio-classroom workshop presented here was developed to improve student understanding of quantitative physiology by a set of question-guided simulations. Simulations are run on Chemostatus, a specially
developed MATLAB-based program, which visualizes key parameters of simulated chemostat cultures as they proceed from dynamic growth conditions to steady state.
In practice, the workshop stimulated active discussion between students and with their teachers. Moreover, its introduction coincided with increased average exam scores for questions on quantitative microbial physiology. The workshop can be easily implemented in formal microbial physiology courses or used by individuals seeking to test and improve their understanding of quantitative microbial physiology and/or chemostat cultivation.","Topology optimization; Additive manufacturing; Manufacturing process planning; Space-time optimization","en","journal article","","","","","","","","","","BT/Biotechnologie","BT/Industriele Microbiologie","","",""
"uuid:060d9f3e-561c-4806-87fb-dd378cfaf3b0","http://resolver.tudelft.nl/uuid:060d9f3e-561c-4806-87fb-dd378cfaf3b0","Adaptive efficient global optimization of systems with independent components","Rehman, S.U. (TU Delft Computational Design and Mechanics); Langelaar, Matthijs (TU Delft Computational Design and Mechanics)","","2017","We present a novel approach for efficient optimization of systems consisting of expensive-to-simulate components and relatively inexpensive system-level simulations. We consider the types of problem in which the components of the system problem are independent in the sense that they do not exchange coupling variables; however, design variables can be shared across components. Component metamodels are constructed using Kriging. The metamodels are adaptively sampled based on a system-level infill sampling criterion using Efficient Global Optimization. The effectiveness of the technique is demonstrated by applying it to numerical examples and an engineering case study. Results show steady and fast convergence to the global deterministic optimum of the problems.","Efficient global optimization; Expected improvement; Gaussian processes; Infill sampling; Kriging; System optimization","en","journal article","","","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:8bb206bb-9b79-43b7-9465-e54921986c51","http://resolver.tudelft.nl/uuid:8bb206bb-9b79-43b7-9465-e54921986c51","Atom-counting in High Resolution Electron Microscopy: TEM or STEM – That's the question","Gonnissen, J. (Universiteit Antwerpen); De Backer, A. (Universiteit Antwerpen); den Dekker, A.J. (TU Delft Team Michel Verhaegen; Universiteit Antwerpen); Sijbers, J. (Universiteit Antwerpen); Van Aert, S. (Universiteit Antwerpen)","","2017","In this work, a recently developed quantitative approach based on the principles of detection theory is used in order to determine the possibilities and limitations of High Resolution Scanning Transmission Electron Microscopy (HR STEM) and HR TEM for atom-counting. So far, HR STEM has been shown to be an appropriate imaging mode to count the number of atoms in a projected atomic column. Recently, it has been demonstrated that HR TEM, when using negative spherical aberration imaging, is suitable for atom-counting as well. The capabilities of both imaging techniques are investigated and compared using the probability of error as a criterion. It is shown that for the same incoming electron dose, HR STEM outperforms HR TEM under common practice standards, i.e. when the decision is based on the probability function of the peak intensities in HR TEM and of the scattering cross-sections in HR STEM. If the atom-counting decision is based on the joint probability function of the image pixel values, the dependence of all image pixel intensities as a function of thickness should be known accurately. Under this assumption, the probability of error may decrease significantly for atom-counting in HR TEM and may, in theory, become lower as compared to HR STEM under the predicted optimal experimental settings. 
However, the commonly used standard for atom-counting in HR STEM leads to a high performance and has been shown to work in practice.","Data processing/image processing; Electron microscope design and characterisation; High-resolution (scanning) transmission electron microscopy (HR (S)TEM)","en","journal article","","","","","","Accepted Author Manuscript","","2019-03-06","","","Team Michel Verhaegen","","",""
"uuid:a49a086e-2566-4505-99ed-5c99307f1e45","http://resolver.tudelft.nl/uuid:a49a086e-2566-4505-99ed-5c99307f1e45","Mean value modelling of diesel engine combustion based on parameterized finite stage cylinder process","Sui, Congbiao (TU Delft Ship Design, Production and Operations; Harbin Engineering University); Song, Enzhe (Harbin Engineering University); Stapersma, D. (TU Delft Ship Design, Production and Operations); Ding, Y. (Harbin Engineering University)","","2017","Mean value diesel engine models are widely used since they focus on the main engine performance and can operate on a time scale that is longer than one revolution, and as a consequence use time steps that are much longer than crank-angle models. Mean Value First Principle (MVFP) models are not primarily intended for engine development but are used for systems studies that are becoming more important for engine users. In this paper two new variants of Seiliger processes, which characterize the engine in-cylinder process with finite stages, are investigated, in particular their ability to correctly model the heat release by a finite number of combustion parameters. MAN 4L20/27 engine measurements are used, and conclusions were drawn as to which Seiliger variant should be used and how to model the combustion shape for more engines. Then expressions to calculate the combustion parameters have been obtained by using a multivariable regression fitting method. The mean value diesel engine model has been corrected and applied to the simulation of a ship propulsion system which contains a modern MAN 18V32/40 diesel engine in its preliminary design stage, and the simulation results have shown the capability of the integration of the MVFP model into a larger system.","Combustion parameter; Combustion process; Diesel engine; Mean value model; Modelling","en","journal article","","","","","","Accepted Author Manuscript","","2019-05-15","","","Ship Design, Production and Operations","","",""
"uuid:bf5cf32b-7a32-4aec-8041-ceae75f170b2","http://resolver.tudelft.nl/uuid:bf5cf32b-7a32-4aec-8041-ceae75f170b2","A new procedure for deep sea mining tailings disposal","Ma, W. (TU Delft Transport Engineering and Logistics); Schott, D.L. (TU Delft Transport Engineering and Logistics); Lodewijks, G. (University of New South Wales)","","2017","Deep sea mining tailings disposal is a new environmental challenge related to water pollution, mineral crust waste handling, and ocean biology. The objective of this paper is to propose a new tailings disposal procedure for the deep sea mining industry. Through comparisons of the tailings disposal methods which exist in on-land mining and the coastal mining fields, a new tailings disposal procedure, i.e., the submarine–backfill–dam–reuse (SBDR) tailings disposal procedure, is proposed. It combines deep sea submarine tailings disposal, backfill disposal, tailings dam disposal, and tailings reuse disposal for the deep sea mining industry. Then, the analytic network process (ANP) method is utilized to evaluate the performances of different tailings disposal methods. The evaluation results of the ANP show that the new proposed tailings disposal procedure is the most suitable for the deep sea mining industry.","analytic network process (ANP); deep sea mining; environmental challenge; SBDR tailings disposal procedure; waste handling; OA-Fund TU Delft","en","journal article","","","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:6d804919-e3a6-4539-be09-a8a566a3c731","http://resolver.tudelft.nl/uuid:6d804919-e3a6-4539-be09-a8a566a3c731","The prioritization and categorization method (PCM) process evaluation at Ericsson: a case study","Ohlsson, Jens (Stockholm University); Han, Shengnan (Stockholm University); Bouwman, W.A.G.A. (TU Delft Information and Communication Technology)","","2017","Purpose: The purpose of this paper is to demonstrate and evaluate the prioritization and categorization method (PCM), which facilitates the active participation of process stakeholders (managers, owners, customers) in process assessments. Stakeholders evaluate processes in terms of effectiveness, efficiency and relevance against certain contextual business and industry factors. This collective evaluation serves as a foundation for the management decision-making process regarding process improvement and redesign. Design/methodology/approach: The PCM is examined based on a case study at Ericsson. In total, 55 stakeholders, representing different organizational levels and functions, assessed eight core processes. Follow-up interviews and feedback after the evaluation sessions were collected for triangulation purposes. Findings: The PCM helps Ericsson evaluate its processes within business context and industry environments. The results show that, to realize seamless end-to-end processes in the eight assessed processes, Ericsson has to make a greater effort to improve its process structures, governance and culture for fulfilling the needs of future business. Ericsson Steering Group is satisfied with the insights provided and has decided to train more stakeholders to use PCM. Research limitations/implications: This research is based on a single case within a specific organizational setting. The results may not necessarily be generalizable to other business and industry settings.
Organizations need to configure PCM in consideration of their own processes and business contingencies to explore and fulfil their process improvement purposes. Originality/value: This paper presents a new context-aware, easy-to-use and holistic method for business process management (BPM), the PCM. The method requires the active engagement of stakeholders, it focusses on developing dynamic BPM capabilities and fully embeds organizational contingencies and contextual factors in the decision-making regarding BPM. This paper contributes a novel method to explorative BPM.","Business process management; Case study; Contextual awareness; Evaluation; Explorative; Prioritization and categorization method","en","journal article","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:f082a2e0-27a6-4d8b-b53a-6ee7f6e64491","http://resolver.tudelft.nl/uuid:f082a2e0-27a6-4d8b-b53a-6ee7f6e64491","Computationally-driven engineering of sublattice ordering in a hexagonal AlHfScTiZr high entropy alloy","Rogal, Lukasz (Polish Academy of Sciences); Bobrowski, Piotr (Polish Academy of Sciences); Körmann, F.H.W. (TU Delft (OLD) MSE-7); Divinski, Sergiy (University of Münster); Stein, Frank (Max-Planck-Institut für Eisenforschung); Grabowski, Blazej (Max-Planck-Institut für Eisenforschung)","","2017","Multi-principal element alloys have enormous potential, but their exploration suffers from the tremendously large range of configurations. In the last decade such alloys have been designed with a focus on random solid solutions. Here we apply an experimentally verified, combined thermodynamic and first-principles design strategy to reverse the traditional approach and to generate a new type of hcp Al-Hf-Sc-Ti-Zr high entropy alloy with a hitherto unique structure. A phase diagram analysis narrows down the large compositional space to a well-defined set of candidates. First-principles calculations demonstrate the energetic preference of an ordered superstructure over the competing disordered solid solutions. The chief ingredient is the Al concentration, which can be tuned to achieve a D019 ordering on the hexagonal lattice. The computationally designed D019 superstructure is experimentally confirmed by transmission electron microscopy and X-ray studies. Our scheme enables the exploration of a new class of high entropy alloys.","Atomistic models; Design, synthesis and processing; Metals and alloys","en","journal article","","","","","","","","","","","(OLD) MSE-7","","",""
"uuid:4ed74e00-be0e-4c77-b0b9-74829ce848c7","http://resolver.tudelft.nl/uuid:4ed74e00-be0e-4c77-b0b9-74829ce848c7","Point spread function based image reconstruction in optical projection tomography","Trull, A.K. (TU Delft ImPhys/Quantitative Imaging); van der Horst, J. (TU Delft ImPhys/Quantitative Imaging); Palenstijn, W.J. (Centrum Wiskunde & Informatica (CWI)); van Vliet, L.J. (TU Delft ImPhys/Quantitative Imaging); van Leeuwen, Tristan (Universiteit Utrecht); Kalkman, J. (TU Delft ImPhys/Quantitative Imaging)","","2017","As a result of the shallow depth of focus of the optical imaging system, the use of standard filtered back projection in optical projection tomography causes space-variant tangential blurring that increases with the distance to the rotation axis. We present a novel optical tomographic image reconstruction technique that incorporates the point spread function of the imaging lens in an iterative reconstruction. The technique is demonstrated using numerical simulations, tested on experimental optical projection tomography data of single fluorescent beads, and applied to high-resolution emission optical projection tomography imaging of an entire zebrafish larva. Compared to filtered back projection, our results show greatly reduced radial and tangential blurring over the entire 5.2×5.2 mm² field of view, and a significantly improved signal-to-noise ratio.","Image reconstruction techniques; Inverse problems; Tomographic image processing","en","journal article","","","","","","Accepted Author Manuscript","","2018-09-20","","","ImPhys/Quantitative Imaging","","",""
"uuid:af81ff69-3827-41d5-900b-e1e543111d9f","http://resolver.tudelft.nl/uuid:af81ff69-3827-41d5-900b-e1e543111d9f","Effect of residual H2O2 from advanced oxidation processes on subsequent biological water treatment: A laboratory batch study","Wang, F. (TU Delft Sanitary Engineering); van Halem, D. (TU Delft Sanitary Engineering); Liu, G. (TU Delft Sanitary Engineering; Oasen); Lekkerkerker, K. (Dunea); van der Hoek, J.P. (TU Delft Sanitary Engineering; Waternet)","","2017","H2O2 residuals from advanced oxidation processes (AOPs) may have critical impacts on the microbial ecology and performance of subsequent biological treatment processes, but little is known about these impacts. The objective of this study was to evaluate how H2O2 residuals influence sand systems with an emphasis on dissolved organic carbon (DOC) removal, microbial activity change and bacterial community evolution. The results from laboratory batch studies showed that 0.25 mg/L H2O2 lowered DOC removal by 10% while higher H2O2 concentrations at 3 and 5 mg/L promoted DOC removal by 8% and 28%. A H2O2 dosage of 0.25 mg/L did not impact microbial activity (as measured by ATP) while high H2O2 dosages, 1, 3 and 5 mg/L, resulted in reduced microbial activity of 23%, 37% and 37% respectively. Therefore, DOC removal was promoted by the increase of H2O2 dosage while microbial activity was reduced. The pyrosequencing results illustrated that bacterial communities were dominated by Proteobacteria. The presence of H2O2 showed clear influence on the diversity and composition of bacterial communities, which became more diverse under 0.25 mg/L H2O2 but conversely less diverse when the dosage increased to 5 mg/L H2O2. Anaerobic bacteria were found to be most sensitive to H2O2 as their growth in batch reactors was limited by both 0.25 and 5 mg/L H2O2 (17–88% reduction).
In conclusion, special attention should be given to the effects of AOP residuals on microbial ecology before introducing AOPs as a pre-treatment to biological (sand) processes. Additionally, the guideline on the maximum allowable H2O2 concentration should be properly evaluated.","Advanced oxidation processes; Hydrogen peroxide; Sand systems; Water treatment; Microbial community","en","journal article","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:8de1ad10-0e28-40d2-9bd1-0e7a8bb13ff5","http://resolver.tudelft.nl/uuid:8de1ad10-0e28-40d2-9bd1-0e7a8bb13ff5","Guided proposals for simulating multi-dimensional diffusion bridges","Schauer, M.R. (Universiteit Leiden); van der Meulen, F.H. (TU Delft Statistics); Van Zanten, Harry (Universiteit van Amsterdam)","","2017","A Monte Carlo method for simulating a multi-dimensional diffusion process conditioned on hitting a fixed point at a fixed future time is developed. Proposals for such diffusion bridges are obtained by superimposing an additional guiding term on the drift of the process under consideration. The guiding term is derived via approximation of the target process by a simpler diffusion process with known transition densities. Acceptance of a proposal can be determined by computing the likelihood ratio between the proposal and the target bridge, which is derived in closed form. We show under general conditions that the likelihood ratio is well defined and show that a class of proposals with a guiding term obtained from linear approximations falls under these conditions.","Change of measure; Data augmentation; Linear processes; Multidimensional diffusion bridge","en","journal article","","","","","","Accepted Author Manuscript","","","","","Statistics","","",""
"uuid:5d2cbe5f-0531-4040-b5ef-f56f69441c21","http://resolver.tudelft.nl/uuid:5d2cbe5f-0531-4040-b5ef-f56f69441c21","The influence of parent concrete and milling intensity on the properties of recycled aggregates","Lotfi, Somayeh (TU Delft Materials and Environment); Rem, P.C. (TU Delft Materials and Environment); Deja, J (AGH University of Science and Technology); Mroz, R (AGH University of Science and Technology)","Di Maio, F. (editor); Lotfi, S. (editor); Bakker, M. (editor); Hu, M. (editor); Vahidi, A. (editor)","2017","The C2CA concrete recycling process consists of a combination of smart demolition, gentle grinding of the crushed concrete in an autogenous mill, and a novel dry classification technology called ADR to remove the fines. The main factors in the C2CA process which influence the properties of Recycled Aggregates (RA) or Recycled Aggregate Concrete (RAC) include the type of Parent Concrete (PC), the intensity of autogenous milling and the ADR cut-size point. This study aims to investigate the influence of PC and the intensity of autogenous milling on the quality of the produced recycled aggregates. Three types of concrete which are frequently demanded in the Dutch market were cast as PC and their fresh and hardened properties were tested. After nearly one year of curing, the PC samples were recycled independently while the aforementioned recycling factors were varied. The effects of different recycling variables on the water absorption, density, crushing resistance and durability of the produced recycled aggregates were investigated. According to the results, the type of parent concrete is the predominant factor influencing the properties of the recycled aggregates. Milling intensity was found to be effective in improving the properties of recycled aggregates coming from weaker parent concrete.
The experimental results suggest that among various milling intensities, milling at medium shear and medium compression improves the overall quality of RA.","C2CA process; Concrete recycling; Recycled aggregate; Recycled aggregate concrete; ADR","en","conference paper","Delft University of Technology","","","","","","","","","","Materials and Environment","","",""
"uuid:fa51c75c-4d46-4b3b-985e-369feafa3198","http://resolver.tudelft.nl/uuid:fa51c75c-4d46-4b3b-985e-369feafa3198","Assessing storage and substitution as power flexibility enablers in industrial processes","Henriques, Margarida Vigario (Lisbon Technical University); Stikkelman, R.M. (TU Delft Energie and Industrie)","","2017","Renewable energy sources are currently presented as an economically viable and environmentally safe option in the near future. A major constraint to the incorporation of wind and solar generation at large scale is the increase of variability in the power system. To assure the perpetual balance between power production and gross consumption, a significant improvement in power system flexibility is required. Such flexibility can be achieved by two demand-side options realized through industrial processes: Storage and Substitution. The power system model in this study contemplates the purchase of electricity from the Dutch Balancing Market. The electricity prices of the Balancing Market are considered unpredictable. The storage system is characterized by the size of the storage tank and by ramp up/down rates, reflecting the changing speed of the production levels. The substitution system is characterized by the ramp rate of substitution between electricity and an alternative energy carrier as input. The impact of the parameters on power system flexibility when connected to the balancing market under several scenarios was analyzed with Linny-R, a software tool that applies Linear Programming optimization. For the storage system, a bigger tank size, a higher ramp rate and a high level of predictability will increase the flexibility of the system. As the actual predictability of the balancing market is limited, the flexibility is limited too, which makes the storage system a questionable option. For the substitution system, flexibility is increased by a higher ramp rate.
The effect of predictability is less dominant, which makes substitution a suitable flexibility enabler for the current Dutch market system. In this context, a restructuring of the energy markets, taking price predictability into account, is suggested as a way of easing the penetration of renewable energy sources.","Balancing Market; Demand Response; Industrial Processes; Power Flexibility; Variability","en","conference paper","IEEE","","","","","","","","","","Energie and Industrie","","",""
"uuid:7914ebba-ee18-4943-9d4e-d08c98252e4c","http://resolver.tudelft.nl/uuid:7914ebba-ee18-4943-9d4e-d08c98252e4c","The responsible research and innovation (RRI) maturity model: Linking theory and practice","Stahl, Bernd Carsten (De Montfort University); Obach, Michael (Technalia); Yaghmaei, E. (TU Delft Ethics & Philosophy of Technology; University of Southern Denmark); Ikonen, Veikko (VTT Technical Research Center of Finland); Chatfield, Kate (University of Central Lancashire); Brem, Alexander (Friedrich-Alexander-Universität Erlangen-Nürnberg; University of Southern Denmark)","","2017","Responsible research and innovation (RRI) is an approach to research and innovation governance aiming to ensure that research purpose, process and outcomes are acceptable, sustainable and even desirable. In order to achieve this ambitious aim, RRI must be relevant to research and innovation in industry. In this paper, we discuss a way of understanding and representing RRI that resonates with private companies and lends itself to practical implementation and action. We propose the development of an RRI maturity model in the tradition of other well-established maturity models, linked with a corporate research and development (R&D) process. The foundations of this model lie in the discourse surrounding RRI and selected maturity models from other domains as well as the results of extensive empirical investigation. The model was tested in three industry environments and insights from these case studies show the model to be viable and useful in corporate innovation processes. With this approach, we aim to inspire further research and evaluation of the proposed maturity model as a tool for facilitating the integration of RRI in corporate management.","Industry; Innovation process; Maturity model; R&D management; Responsible research and innovation; RRI","en","journal article","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:3d2f11ed-294e-40d9-9ed6-a102967dd402","http://resolver.tudelft.nl/uuid:3d2f11ed-294e-40d9-9ed6-a102967dd402","Laser-driven resonance of dye-doped oil-coated microbubbles: Experimental study","Lajoinie, Guillaume (University of Twente); Lee, Jeong Yu (University of Oxford); Owen, Joshua (University of Oxford); Kruizinga, P. (TU Delft ImPhys/Acoustical Wavefield Imaging; Erasmus MC); de Jong, N. (TU Delft ImPhys/Acoustical Wavefield Imaging; Erasmus MC); Van Soest, Gijs (Erasmus MC); Stride, Eleanor (University of Oxford); Versluis, Michel (University of Twente)","","2017","Photoacoustic (PA) imaging offers several attractive features as a biomedical imaging modality, including excellent spatial resolution and functional information such as tissue oxygenation. A key limitation, however, is the contrast to noise ratio that can be obtained from tissue depths greater than 1-2 mm. Microbubbles coated with an optically absorbing shell have been proposed as a possible contrast agent for PA imaging, offering greater signal amplification and improved biocompatibility compared to metallic nanoparticles. A theoretical description of the dynamics of a coated microbubble subject to laser irradiation has been developed previously. The aim of this study was to test the predictions of the model. Two different types of oil-coated microbubbles were fabricated and then exposed to both pulsed and continuous wave (CW) laser irradiation. Their response was characterized using ultra high-speed imaging. Although there was considerable variability across the population, good agreement was found between the experimental results and theoretical predictions in terms of the frequency and amplitude of microbubble oscillation following pulsed excitation. 
Under CW irradiation, highly nonlinear behavior was observed which may be of considerable interest for developing different PA imaging techniques with greatly improved contrast enhancement.","Lasers; Microbubbles; Acoustic signal processing; Medical imaging; Acoustic transducers","en","journal article","","","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:1af33f64-5230-4842-8e0d-3da57f031edc","http://resolver.tudelft.nl/uuid:1af33f64-5230-4842-8e0d-3da57f031edc","Business Model Implementation within Networked Enterprises: A Case Study on a Finnish Pharmaceutical Project","Solaimani, Sam (Nyenrode Business Universiteit); Heikkilä, Marikka (University of Turku); Bouwman, W.A.G.A. (TU Delft Information and Communication Technology; Åbo Akademi University)","","2017","In many entrepreneurial projects, the concept of the business model (BM) is used to describe a business idea at a high level and in a holistic way. However, existing literature pays less attention to the implementation (or execution) of BMs. Implementation becomes more complex when a BM is proposed by or requires a network of collaborating enterprises. The aim of this paper is to provide an approach based on empirical research that supports BM transition from design to implementation. The empirical data used in this paper is based on a case study involving an innovative project in the pharmaceutical sector in Finland. The case analysis demonstrates how a high-level BM needs careful consideration of its operational components from a network perspective to secure both value creation and capture. Drawing on the analysis, six concluding propositions on BM implementation in networked settings are put forward.","Business model; Business processes; Case study; Networked enterprises","en","journal article","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:50598239-0f8c-4760-9955-3305c6700bf0","http://resolver.tudelft.nl/uuid:50598239-0f8c-4760-9955-3305c6700bf0","Comparing gravity-based to seismic-derived lithosphere densities: A case study of the British Isles and surrounding areas","Root, B.C. (TU Delft Astrodynamics & Space Missions); Ebbing, J. (Christian-Albrechts-University); van der Wal, W. (TU Delft Astrodynamics & Space Missions); England, R.W. (University of Leicester); Vermeersen, L.L.A. (TU Delft Physical and Space Geodesy; TU Delft Astrodynamics & Space Missions)","","2017","Lithospheric density structure can be constructed from seismic tomography, gravity modelling, or using both data sets. The different approaches have their own uncertainties and limitations. This study aims to characterize and quantify some of the uncertainties in gravity modelling of lithosphere densities. To evaluate the gravity modelling we compare gravity-based and seismic velocity-based approaches to estimating lithosphere densities. In this study, we use a crustal model together with lithospheric isostasy and gravity field observations to estimate lithosphere densities. To quantify the effect of uncertainty in the crustal model, three models are implemented in this study: CRUST1.0, EuCrust-07 and a high-resolution P-wave velocity model of the British Isles and surrounding areas. Different P-wave velocity-to-density conversions are used to study the uncertainty in these conversion methods. The crustal density models are forward modelled into gravity field quantities using a method that is able to produce spherical harmonic coefficients. The deep mantle signal is assumed to be removed by excluding spherical harmonic coefficients of degree 0–10 from the observed gravity field. The uncertainty in the resulting lithosphere densities due to the different crustal models is ±110 kg m−3, which is the largest uncertainty in gravity modelling. 
Other sources of uncertainty, such as the VP to density conversion (±10 kg m−3), long-wavelength truncation (±5 kg m−3), choice of reference model (<±20 kg m−3) and Lithosphere Asthenosphere Boundary uncertainty (±30 kg m−3), proved to be of lesser importance. The resulting lithosphere density solutions are compared to density models based on a shear wave velocity model. The comparison shows that the gravity-based models have an increased lateral resolution compared to the tomographic solutions. However, the density anomalies of the gravity-based models are three times higher. This is mainly due to the high resolution in the gravity field. To account for this, the gravity-based density models are filtered with a spatial Gaussian filter with 200 km half-width, which results in density estimates similar (±35 kg m−3) to those of the tomographic approach. Lastly, the gravity-based density is used to estimate laterally varying conversion factors, which correlate with major tectonic regions. The independent gravity-based solutions could help in identifying different compositional domains in the lithosphere, when compared to the tomographic solutions.","Gravity anomalies and Earth structure; Mantle processes; Europe","en","journal article","","","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:045ff678-d64b-4e01-aa1b-b07f7e331a8f","http://resolver.tudelft.nl/uuid:045ff678-d64b-4e01-aa1b-b07f7e331a8f","On a unified framework for linear nuisance parameters","Hu, Y. (TU Delft Signal Processing Systems); Leus, G.J.T. (TU Delft Signal Processing Systems)","","2017","Estimation problems in the presence of deterministic linear nuisance parameters arise in a variety of fields. To cope with those, three common methods are widely considered: (1) jointly estimating the parameters of interest and the nuisance parameters; (2) projecting out the nuisance parameters; (3) selecting a reference and then taking differences between the reference and the observations, which we will refer to as “differential signal processing.” A lot of literature has been devoted to these methods, yet all follow separate paths. Based on a unified framework, we analytically explore the relations between these three methods, where we particularly focus on the third one and introduce a general differential approach to cope with multiple distinct nuisance parameters. After a proper whitening procedure, the corresponding best linear unbiased estimators (BLUEs) are shown to be all equivalent to each other. Accordingly, we unveil some surprising facts, which are in contrast to what is commonly considered in literature, e.g., the reference choice is actually not important for the differencing process. Since this paper formulates the problem in a general manner, one may specialize our conclusions to any particular application. Some localization examples are also presented in this paper to verify our conclusions.","Best linear unbiased estimator (BLUE); Differential signal processing; Joint estimation; Linear nuisance parameters; Orthogonal subspace projection (OSP); Source localization","en","journal article","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:ff2bae58-fea8-4a77-b02c-ce577f1ea03b","http://resolver.tudelft.nl/uuid:ff2bae58-fea8-4a77-b02c-ce577f1ea03b","Dutch politicians’ use of cost–benefit analysis","Mouter, N. (TU Delft Transport and Logistics)","","2017","28 Dutch politicians and 10 top-level civil servants were interviewed about the way Dutch politicians use cost–benefit analysis (CBA). Various types of use were identified. Politicians use CBA: (1) When forming their opinion about the desirability of transport projects; (2) As political ammunition (opportunistic use); (3) To make themselves and their decisions look more rational (symbolic use). None of the politicians stated that they solely base their judgment on CBAs. Politicians mention seven barriers that hamper the use of CBA when forming their opinion: (1) The process of forming an opinion is trivial; (2) Politicians prefer to form their opinion based on conversations rather than on reading reports; (3) Politicians don’t trust CBA’s impartiality; (4) Politicians disagree with normative choices made in CBA. An example of such a normative choice is that CBA attaches an equally large weight to everybody’s utility changes. (5) Politicians think that CBA’s explanatory power is limited; (6) Politicians receive CBAs too late; (7) When there is plenty of money, politicians care less about a project’s social profitability. Members of Parliament identified barriers 3 and 6 as the most important barriers. They regard publishing CBAs one or two months before a debate as the most auspicious solution for rectifying these barriers. An interesting observation is that no barriers for the opportunistic and symbolic use of CBA by politicians were identified. 
Hence, it can be concluded that it is highly likely that when politicians receive CBAs for transport projects, they will use the CBA in an opportunistic and symbolic way, but politicians will not necessarily use CBA when forming their opinion.","Cost–benefit analysis (CBA); Decision–making process; Knowledge utilization; Transport appraisal; Transport planning","en","journal article","","","","","","","","","","","Transport and Logistics","","",""
"uuid:ea9f5092-adf7-4294-bb9d-9e558be8f598","http://resolver.tudelft.nl/uuid:ea9f5092-adf7-4294-bb9d-9e558be8f598","Mining encrypted software logs using alpha algorithm","Tillem, G. (TU Delft Cyber Security); Erkin, Z. (TU Delft Cyber Security); Lagendijk, R.L. (TU Delft Intelligent Systems)","Obaidat, M.S. (editor); Samarati, P. (editor); Cabello, E. (editor)","2017","The growing complexity of software with respect to technological advances encourages model-based analysis of software systems for validation and verification. Process mining is one recently investigated technique for such analysis which enables the discovery of process models from event logs collected during software execution. However, the usage of logs in process mining can be harmful to the privacy of data owners. While for a software user the existence of sensitive information in logs can be a concern, for a software company, the intellectual property of their product and confidential company information within logs can pose a threat to the company's privacy. In this paper, we propose a privacy-preserving protocol for the discovery of process models for software analysis that assures the privacy of users and companies. For this purpose, our proposal uses encrypted logs and processes them using cryptographic protocols in a two-party setting. Furthermore, our proposal applies data packing on the cryptographic protocols to optimize computations by reducing the number of repetitive operations. The experiments show that, with data packing, the performance of our protocol is promising for privacy-preserving software analysis. To the best of our knowledge, our protocol is the first of its kind for software analysis that relies on processing encrypted logs using process mining techniques.","Applied Cryptography; Homomorphic Encryption; Software Privacy; Software Process Mining","en","conference paper","SciTePress","","","","","","","","","Intelligent Systems","Cyber Security","","",""
"uuid:39a87fe9-1aa5-43f3-addf-7535fc659b4d","http://resolver.tudelft.nl/uuid:39a87fe9-1aa5-43f3-addf-7535fc659b4d","High aspect ratio spiral resonators for process variation investigation and MEMS applications","Middelburg, L.M. (TU Delft Electronic Components, Technology and Materials); el Mansouri, B. (TU Delft Electronic Components, Technology and Materials); van Zeijl, H.W. (TU Delft Electronic Components, Technology and Materials); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials); Poelma, René H. (TU Delft Electronic Components, Technology and Materials)","","2017","In this work a method is described to investigate process variations across a wafer. Through-wafer MEMS spiral resonators were designed, simulated, fabricated and characterized by measuring the eigenfrequency and corresponding mode shapes. Measuring the eigenfrequency and resulting spectral behavior of resonators on different locations on the wafer was performed by using an optical measurement setup. Two laser beams were used, of which one is modulated by the periodic movement of the center mass of the resonator. This beam is reflected back from the resonator and hits a photodiode. Variations in light intensity due to movement of the resonator provide a measurement signal correlated to that movement. Preliminary measurements showed that the measured eigenfrequencies are in agreement with the simulations, with deviations in the range of 0-10%.","bulk micromachining; DRIE etching; MEMS; process variations; Resonators","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Electronic Components, Technology and Materials","","",""
"uuid:c87031f8-13f4-4576-8ef7-1888b090db1e","http://resolver.tudelft.nl/uuid:c87031f8-13f4-4576-8ef7-1888b090db1e","Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine","Verkade, J.S. (TU Delft Safety and Security Science; Deltares; Ministerie van Infrastructuur en Milieu); Brown, J. D. (Hydrologic Solutions Limited); Davids, F. (Deltares; Ministerie van Infrastructuur en Milieu); Reggiani, P. (University of Siegen); Weerts, A. H. (Deltares; Wageningen University & Research)","","2017","Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) ‘dressing’ of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) ‘dressing’ of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the ‘total uncertainty’ that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the ‘lumped’ approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the ‘source-specific’ approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. 
Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.","Ensemble dressing; Hydrological forecasting; Predictive uncertainty; Quantile Regression; Statistical post-processing","en","journal article","","","","","","","","","","","Safety and Security Science","","",""
"uuid:a46d4243-739e-488d-b237-8cfbe05cbdc6","http://resolver.tudelft.nl/uuid:a46d4243-739e-488d-b237-8cfbe05cbdc6","Behind the Scenes of Scenario-Based Training: Understanding Scenario Design and Requirements in High-Risk and Uncertain Environments","Noori, Nadia Saad (University of Agder); Wang, Y. (TU Delft Policy Analysis); Comes, M. (University of Agder); Lukosch, H.K. (TU Delft Policy Analysis)","","2017","Simulation exercises as a training tool for enhancing preparedness for emergency response are widely adopted in disaster management. This paper addresses current scenario design processes, proposes an alternative approach for simulation exercises and introduces a conceptual design of an adaptive scenario generator. Our work is based on a systematic literature review and observations made during the TRIPLEX-2016 exercise in Farsund, Norway. The planning process and scenario selection of simulation exercises directly impact the effectiveness of intra- and interorganizational cooperation. However, collective learning goals are rarely addressed and most simulations are focused on institution-specific learning goals. Current scenario design processes are often inflexible and begin from scratch for each exercise. In our approach, we address both individual and collective learning goals and the demand to develop scenarios on different layers of organizational learning. Further, we propose a scenario generator that partly automates the scenario selection and adaptively responds to the evolution of the exercise.","Humanitarian simulation exercise; scenario design process; collective learning; interorganizational coordination","en","conference paper","","","","","","","","","","","Policy Analysis","","",""
"uuid:f5a44c57-697c-4a4a-8f1e-7b27abd14024","http://resolver.tudelft.nl/uuid:f5a44c57-697c-4a4a-8f1e-7b27abd14024","An innovative approach to overcome saturation and recovery issues of CVD graphene-based gas sensors","Ricciardella, F. (TU Delft Electronic Components, Technology and Materials); Vollebregt, S. (TU Delft Electronic Components, Technology and Materials); Polichetti, Tiziana (ENEA Research Center); Alfano, B. (ENEA UTTP-MDB); Massera, E. (ENEA UTTP-MDB); Sarro, Pasqualina M (TU Delft Electronic Components, Technology and Materials)","","2017","In this work, we present an innovative method which makes it possible to overcome fundamental limitations affecting graphene-based chemi-sensors operating under environmental conditions, namely the lack of signal saturation and the poor recovery after the detection step. The method, which exploits the differential current instead of the current itself, is validated by applying it to different devices having an exposed area equal to 512 µm². The analysis is performed by adopting nitrogen dioxide (NO2) as target gas in the range from 0.12 ppm to 1.5 ppm. The reliability of the approach is further confirmed by performing sensing tests towards NO2 with the relative humidity set at two different levels, 30% and 50%.","Graphene-based gas sensors; differential calibration method; NO2; environmental conditions; chemical vapor deposition; transfer-free process","en","conference paper","IEEE","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:bf440872-a40b-4a35-9c28-801590d59799","http://resolver.tudelft.nl/uuid:bf440872-a40b-4a35-9c28-801590d59799","New Frontiers in Analyzing Dynamic Group Interactions: Bridging Social and Computer Science","Lehmann-Willenbrock, Nale (Universiteit van Amsterdam); Hung, H.S. (TU Delft Pattern Recognition and Bioinformatics); Keyton, Joann (University of North Carolina)","","2017","This special issue on advancing interdisciplinary collaboration between computer scientists and social scientists documents the joint results of the international Lorentz workshop, “Interdisciplinary Insights into Group and Team Dynamics,” which took place in Leiden, The Netherlands, July 2016. An equal number of scholars from social and computer science participated in the workshop and contributed to the papers included in this special issue. In this introduction, we first identify interaction dynamics as the core of group and team models and review how scholars in social and computer science have typically approached behavioral interactions in groups and teams. Next, we identify key challenges for interdisciplinary collaboration between social and computer scientists, and we provide an overview of the different articles in this special issue aimed at addressing these challenges.","group and team dynamics; interaction processes; interdisciplinary collaboration; social science and computer science","en","journal article","","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:a812cd9a-dcc7-4406-a0c3-6361328b9d01","http://resolver.tudelft.nl/uuid:a812cd9a-dcc7-4406-a0c3-6361328b9d01","Bayesian estimation of discretely observed multi-dimensional diffusion processes using guided proposals","van der Meulen, F.H. (TU Delft Statistics); Schauer, M.R. (Universiteit Leiden)","","2017","Estimation of parameters of a diffusion based on discrete time observations poses a difficult problem due to the lack of a closed form expression for the likelihood. From a Bayesian computational perspective it can be cast as a missing data problem where the diffusion bridges in between discrete-time observations are missing. The computational problem can then be dealt with using a Markov-chain Monte-Carlo method known as data-augmentation. If unknown parameters appear in the diffusion coefficient, direct implementation of data-augmentation results in a Markov chain that is reducible. Furthermore, data-augmentation requires efficient sampling of diffusion bridges, which can be difficult, especially in the multidimensional case. We present a general framework to deal with these problems that does not rely on discretisation. The construction generalises previous approaches and sheds light on the assumptions necessary to make these approaches work. We define a random-walk type Metropolis-Hastings sampler for updating diffusion bridges. Our methods are illustrated using guided proposals for sampling diffusion bridges. These are Markov processes obtained by adding a guiding term to the drift of the diffusion. We give general guidelines on the construction of these proposals and introduce a time change and scaling of the guided proposal that reduces discretisation error. 
Numerical examples demonstrate the performance of our methods.","Data augmentation; Discretisation of path integral; FitzHugh-Nagumo model; Innovation process; Linear process; Multidimensional diffusion bridge; Non-centred parametrisation","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:d6f830b1-d10a-44e7-904a-8c72519d1226","http://resolver.tudelft.nl/uuid:d6f830b1-d10a-44e7-904a-8c72519d1226","Single-Step CMOS Compatible Fabrication of High Aspect Ratio Microchannels Embedded in Silicon","Kluba, M.M. (TU Delft Electronic Components, Technology and Materials); Arslan, A. (Philips Healthcare); Stoute, R. (TNO); Muganda, James (Eindhoven University of Technology); Dekker, R. (TU Delft Electronic Components, Technology and Materials)","","2017","This paper presents a new method for the CMOS compatible fabrication of microchannels integrated into a silicon substrate. In a single-step DRIE process (Deep Reactive Ion Etching), a network of microchannels with a High Aspect Ratio (HAR) of up to 10 can be etched in a silicon substrate through a mesh mask. In the same single etching step, microchannels with various dimensions (width, length, and depth) can be obtained by tuning the process and design parameters. These fully embedded structures enable further wafer processing and integration of electronic components like sensors and actuators in wafers with microchannels.","embedded microchannel; HAR; mesh mask; single-step DRIE (Bosch process)","en","conference paper","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:6f486e26-8af9-41bf-92b6-4a176d426ffe","http://resolver.tudelft.nl/uuid:6f486e26-8af9-41bf-92b6-4a176d426ffe","Limit theorems for the zig-zag process","Bierkens, G.N.J.C. (TU Delft Statistics); Duncan, Andrew (Imperial College London)","","2017","Markov chain Monte Carlo (MCMC) methods provide an essential tool in statistics for sampling from complex probability distributions. While the standard approach to MCMC involves constructing discrete-time reversible Markov chains whose transition kernel is obtained via the Metropolis-Hastings algorithm, there has been recent interest in alternative schemes based on piecewise deterministic Markov processes (PDMPs). One such approach is based on the zig-zag process, introduced in Bierkens and Roberts (2016), which has been shown to provide a highly scalable scheme for sampling in the big data regime; see Bierkens et al. (2016). In this paper we study the performance of the zig-zag sampler, focusing on the one-dimensional case. In particular, we identify conditions under which a central limit theorem holds and characterise the asymptotic variance. Moreover, we study the influence of the switching rate on the diffusivity of the zig-zag process by identifying a diffusion limit as the switching rate tends to infinity. Based on our results we compare the performance of the zig-zag sampler to existing Monte Carlo methods, both analytically and through simulations.","central limit theorem; continuous-time Markov process; functional central limit theorem; MCMC; nonreversible Markov process; piecewise deterministic Markov process","en","journal article","","","","","","Accepted author manuscript","","2018-03-08","","","Statistics","","",""
"uuid:ecfd3959-35ff-415e-b43c-605610d219a5","http://resolver.tudelft.nl/uuid:ecfd3959-35ff-415e-b43c-605610d219a5","A piecewise deterministic scaling limit of lifted Metropolis-Hastings in the Curie-Weiss model","Bierkens, G.N.J.C. (TU Delft Statistics); Roberts, Gareth (University of Warwick)","","2017","In Turitsyn, Chertkov and Vucelja [Phys. D 240 (2011) 410-414] a nonreversible Markov Chain Monte Carlo (MCMC) method on an augmented state space was introduced, here referred to as Lifted Metropolis-Hastings (LMH). A scaling limit of the magnetization process in the Curie-Weiss model is derived for LMH, as well as for Metropolis-Hastings (MH). The required jump rate in the high (supercritical) temperature regime equals n^(1/2) for LMH, which should be compared to n for MH. At the critical temperature, the required jump rate equals n^(3/4) for LMH and n^(3/2) for MH, in agreement with experimental results of Turitsyn, Chertkov and Vucelja (2011). The scaling limit of LMH turns out to be a nonreversible piecewise deterministic exponentially ergodic ""zig-zag"" Markov process.","Exponential ergodicity; Markov chain Monte Carlo; Phase transition; Piecewise deterministic Markov process; Weak convergence","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:8722f716-8570-4656-beaa-14f3dc4072ef","http://resolver.tudelft.nl/uuid:8722f716-8570-4656-beaa-14f3dc4072ef","Bayesian estimation of incompletely observed diffusions","van der Meulen, F.H. (TU Delft Statistics); Schauer, M.R. (Universiteit Leiden)","","2017","We present a general framework for Bayesian estimation of incompletely observed multivariate diffusion processes. Observations are assumed to be discrete in time, noisy and incomplete. We assume the drift and diffusion coefficient depend on an unknown parameter. A data-augmentation algorithm for drawing from the posterior distribution is presented which is based on simulating diffusion bridges conditional on a noisy incomplete observation at an intermediate time. The dynamics of such filtered bridges are derived and it is shown how these can be simulated using a generalised version of the guided proposals introduced in Schauer, Van der Meulen and Van Zanten (2017, Bernoulli 23(4A)).","Data augmentation; enlargement of filtration; filtered bridge; guided proposal; innovation scheme; Metropolis–Hastings; multidimensional diffusion bridge; partially observed diffusion; smoothing diffusion processes","en","journal article","","","","","","","","","","","Statistics","","",""
"uuid:37e8d1c2-8e24-4a2b-9b7f-d833f6679af5","http://resolver.tudelft.nl/uuid:37e8d1c2-8e24-4a2b-9b7f-d833f6679af5","Monolithically Integrated Light Feedback Control Circuit for Blue/UV LED Smart Package","Koladouz Esfahani, Z. (TU Delft Electronic Components, Technology and Materials); Tohidian, M. (TU Delft Electronics); van Zeijl, H.W. (TU Delft Electronic Components, Technology and Materials); Kolahdouz, Mohammadreza (University of Tehran); Zhang, Kouchi (TU Delft Electronic Components, Technology and Materials)","","2017","Given the performance decay of high-power light-emitting diode (LED) chips over time and package condition changes, having a reliable output light for sensitive applications is a point of concern. In this study, a light feedback control circuit, including blue-selective photodiodes, for blue/ultraviolet (UV) LED, has been designed and implemented using a low-cost seven-mask BiCMOS process. The feedback circuit was monolithically integrated in a package with four high-power blue LED chips. For sensing the intensity of exact colored blue/UV light in the package, selective photodiodes at 480-nm wavelength were implemented. An opamp-based feedback circuit combined with a high-power transistor controls the output light based on real-time sensor data. The whole system is a low-cost integrated package that guarantees a stable and reliable output light under different working conditions. Output light can be also controlled linearly by a reference input voltage.","BiCMOS process; Blue/ultraviolet (UV) light-emitting diode (LED); feedback control; high current CMOS transistor; photodetector; wafer level LED package.","en","journal article","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:5d6c967e-e4b3-4a65-80ec-53d191e3da03","http://resolver.tudelft.nl/uuid:5d6c967e-e4b3-4a65-80ec-53d191e3da03","Bounding the probability of resource constraint violations in multi-agent MDPs","de Nijs, F. (TU Delft Algorithmics); Walraven, E.M.P. (TU Delft Algorithmics); de Weerdt, M.M. (TU Delft Algorithmics); Spaan, M.T.J. (TU Delft Algorithmics)","","2017","Multi-agent planning problems with constraints on global resource consumption occur in several domains. Existing algorithms for solving Multi-agent Markov Decision Processes can compute policies that meet a resource constraint in expectation, but these policies provide no guarantees on the probability that a resource constraint violation will occur. We derive a method to bound constraint violation probabilities using Hoeffding's inequality. This method is applied to two existing approaches for computing policies satisfying constraints: the Constrained MDP framework and a Column Generation approach. We also introduce an algorithm to adaptively relax the bound up to a given maximum violation tolerance. Experiments on a hard toy problem show that the resulting policies outperform static optimal resource allocations to an arbitrary level. By testing the algorithms on more realistic planning domains from the literature, we demonstrate that the adaptive bound is able to efficiently trade off violation probability with expected value, outperforming state-of-the-art planners.","Markov Decision Process; Resource constraints; Planning under uncertainty","en","conference paper","American Association for Artificial Intelligence (AAAI)","","","","","","","","","","Algorithmics","","",""
"uuid:d7d901ae-38c4-4308-8a90-a10fbb50a791","http://resolver.tudelft.nl/uuid:d7d901ae-38c4-4308-8a90-a10fbb50a791","The influence of team factors and team processes on game based learning in student teams","Kurapati, S. (TU Delft Policy Analysis); Lukosch, H.K. (TU Delft Policy Analysis); Freese, M. (TU Delft Policy Analysis); Verbraeck, A. (TU Delft Policy Analysis)","","2017","The significance of teams, teamwork and team performance is unprecedented in many learning environments of institutes in higher education and organizations. While individual and team tasks are quite straightforward to define, teamwork is a set of interrelated cognitions, attitudes and behaviours that contribute to the dynamic processes leading to team performance. To address the research gaps related to team processes and team factors related to game based learning, we conducted a quasi-experimental gaming session using a multi-player game called Yard Crane Scheduler 3. Our analysis of the game session showed that mutual performance monitoring had a significant positive effect on team task performance, while mutual support between team members had a negative effect on the team task performance. Shared mental models and closed loop communication were important for the team task performance but the development of shared mental models through shared displays and the effectiveness of closed loop communication were hindered by time pressure related to the team task. 
Our findings indicate that knowledge of team factors and team processes that affect team performance can help instructors to design team tasks and evaluate students in an efficient and holistic manner.","Board games; Simulation games; Team factors; Team processes; Teams","en","conference paper","Academic Conferences","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","2018-08-01","","","Policy Analysis","","",""
"uuid:76ccfb2d-0e95-44f2-9705-7d65de752446","http://resolver.tudelft.nl/uuid:76ccfb2d-0e95-44f2-9705-7d65de752446","Optimal control of EGR system in gasoline engine based on Gaussian process","Zarghami, M. (Ecole de Technologie Superieure (ETS)); Hassan HosseinNia, S. (TU Delft Mechatronic Systems Design); Babazadeh, M. (University of Zanjan)","","2017","The contribution described in this paper is concentrated on the integration of exhaust gas recirculation (EGR) system into the process of combustion in an optimal manner. In practice, deriving a state-space model of this actuator is an energetic task as a result of involving some uncertain chemical reactions. To alleviate the effect of unobserved phenomena, which does not seem to be easy in modeling, an improved Gaussian Process (GP) is represented for identifying such dynamics. In this approach, practical modification in general formulation of GP is provided based on proportional feedback gain adjustment. Afterwards, the obtained model is considered for design of optimal model-based control strategy. The whole aim is focused on achieving a green economically gasoline engine by optimizing the trend of fuel consumption. Eventually, simulation results illustrate the effectiveness of proposed structure in EGR systems.","Automotive engineering; Diesel engines; Fractional control; EGR valve; Gaussian Process","en","journal article","","","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:513d01de-e46d-4a47-b965-ae2c268279b9","http://resolver.tudelft.nl/uuid:513d01de-e46d-4a47-b965-ae2c268279b9","Low Temperature CVD Grown Graphene for Highly Selective Gas Sensors Working under Ambient Conditions","Ricciardella, F. (TU Delft Electronic Components, Technology and Materials); Vollebregt, S. (TU Delft Electronic Components, Technology and Materials); Polichetti, T. (ENEA Research Center); Alfano, B. (ENEA UTTP-MDB); Massera, E. (ENEA UTTP-MDB); Sarro, Pasqualina M (TU Delft Electronic Components, Technology and Materials)","","2017","In this paper we report on gas sensors based on graphene grown by Chemical Vapor Deposition at 850 °C. Mo was used as catalyst for graphene nucleation. Resistors were directly designed on pre-patterned Mo using the transfer-free process we recently developed, thus avoiding films damage during the transfer to the target substrate. Devices operating at room temperature and relative humidity set at 50% were tested towards NO2. The sensors resulted to be highly specific towards NO2 and showed current variation up to 6%. The performances were compared with those of gas sensors based on graphene grown at 980 °C, which represents the usual growth temperature for such material. The findings show that by lowering the graphene growth temperature and consequently the energy consumptions the sensing benefits of these devices are still preserved.","gas sensors; NO2; environmental conditions; graphene; chemical vapor deposition; transfer-free process","en","conference paper","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:564c6f2d-b714-489f-9a87-01573665e47f","http://resolver.tudelft.nl/uuid:564c6f2d-b714-489f-9a87-01573665e47f","Stability, local structure and electronic properties of borane radicals on the Si(100) 2x1:H surface: A first-principles study","Fang, C. (TU Delft Electronic Instrumentation; TU Delft QN/High Resolution Electron Microscopy; Brunel University); Mohammadi, V. (TU Delft Electronic Instrumentation); Nihtianova, S. (TU Delft Electronic Instrumentation); Sluiter, M.H.F. (TU Delft (OLD) MSE-7)","","2017","Abstract Deposition of a thin B layer via decomposition of B2H6 on Si (PureB process) produces B-Si junctions which exhibit unique electronic and optical properties. Here we present the results of our systematic first-principles study of BHn (n=0-3) radicals on Si(100)2x1:H surfaces, the initial stage of the PureB process. The calculations reveal an unexpectedly high stability of BH2 and BH3 radicals on the surface and a plausible atomic exchange mechanism of surface Si atoms with B atoms from absorbed BHn radicals. The calculations show strong local structural relaxation and reconstructions, as well as strong chemical bonding between the surface Si and the BHn radicals. Electronic structure calculations show various defect states in the energy gap of Si due to the BHn absorption. These results shed light on the initial stages of the complicated PureB process and also rationalize the unusual electronic, optical and electrical properties of the deposited Si surfaces.","Borane deposition; H passivated Si(001) surface; PureB process; Ab initio calculations","en","journal article","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:291b50ec-73d4-474e-b985-4c7558919d4b","http://resolver.tudelft.nl/uuid:291b50ec-73d4-474e-b985-4c7558919d4b","Advanced signal processing techniques for fibre-optic structural health monitoring","Groves, R.M. (TU Delft Structural Integrity & Composites)","","2017","Fibre optic sensors can measure a range of physics and chemical parameters. Some of the more common fibre optic sensors are the fibre Bragg grating (FBG), the long period grating (LPG), the Fabry-Pérot Interferometer (FPI) and various distributed fibre optic sensors based on optical time-domain reflectometry (OTDR) and optical frequency domain reflectometry (OFDR). Each of these sensor types utilises different interrogator hardware and signal processing software. The goals of this research are to develop new algorithms for multi-parameter sensing and to improve the sensitivity and resolution of fibre optic sensing by developing new approaches. This is done by stepping back from current algorithms, and considering what additional information is expected to be present in and can be extracted from the signal. Recent publications have shown that advanced signal processing techniques can be used for bend sensing, for damage type classification and to improve the spatial resolution of the sensing. Structural health monitoring requires the measurement of different structural parameters to determine the health of a structure. A commonly used definition of structural health monitoring is “SHM is the integration of sensing and possibly also actuation devices to allow the loading and damaging conditions of a structure to be recorded, analysed, localized, and predicted in a way that non-destructive testing (NDT) becomes an integral part of the structure and a material”. From this definition four levels of structural heath monitoring are defined: (1) mechanical and environmental load monitoring, (2) identification and location of damage, (3) damage quantification, and (4) prognosis of residual life. 
The paper will explore how advanced signal processing techniques can drive the development of multi-parameter sensing with fibre optics, and can lead towards the goal of an integrated fibre-optic sensing system for structural health monitoring applications.","fibre optic sensors; signal processing; structural health monitoring; aerospace","en","conference paper","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:af42a935-e566-4fd9-bc5e-727377d6a68a","http://resolver.tudelft.nl/uuid:af42a935-e566-4fd9-bc5e-727377d6a68a","Dynamic Airline Booking Forecasting","van Ostaijen, Thom; Santos, Bruno F. (TU Delft Air Transport & Operations); Mitici, M.A. (TU Delft Air Transport & Operations)","","2017","This paper proposes a model for dynamic booking forecasting using a time-inhomogeneous Markov process. The transition probabilities are estimated based on a combination of an empirical and a parametric distribution. This model is applied for flight booking forecasting, where flight forecasts are updated on a daily basis over a time horizon of up to 300 days before the day of departure. The distribution of flight bookings over this time horizon, as well as the expected average flight bookings are determined. Historical data of two years of flights is used in our numerical analysis. The performance of our model is compared with two classical forecasting methods: the additive pickup method and the historical average. We show that our proposed model is up to 8% more accurate than the two classical methods mentioned above. Moreover, by determining the distribution of the flight bookings over a horizon of 300 days before departure, we provide additional information about the uncertainty around the flight
bookings.","Airline Booking Forecasting; Markov Processes","en","conference paper","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:ea077a7d-8460-4b04-95b0-b40a8a4dc9b3","http://resolver.tudelft.nl/uuid:ea077a7d-8460-4b04-95b0-b40a8a4dc9b3","Architectural design education: in varietate unitas","van Dooren, E.J.G.C. (TU Delft Architectural Engineering); van Merriënboer, J. (Universiteit Maastricht); Boshuizen, H.P.A. (Open University of the Netherlands); van Dorst, M.J. (TU Delft Urbanism); Asselbergs, M.F. (TU Delft Architectural Engineering)","","2017","A fascinating and rich landscape of personal views and approaches can be seen in architectural design and in architectural design education. This variation may be confusing for students. This paper focuses on the question: is the framework of generic elements that we developed for explicating the design process helpful to compare the differences in architectural design approaches? The results of interviewing a variety of 15 architectural, urban and landscape designers show all kinds of personal approaches that have a set of five underlying generic elements in common. Therefore, the framework may be helpful for teachers and students to describe these personal approaches and may help students in understanding differences and similarities and in finding out what their own personal approach may be.","Architectural design; Design education; Design methods; Design process; Design strategy","en","journal article","","","","","","","","","","Urbanism","Architectural Engineering","","",""
"uuid:649af14f-59da-4007-a049-ae30127b24d0","http://resolver.tudelft.nl/uuid:649af14f-59da-4007-a049-ae30127b24d0","Integration of Energy and Material Performance of Buildings: I=E+M","Alsema, Erik (W/E Consultants Sustainable Building); Anink, David (W/E Consultants Sustainable Building); Meijer, A. (TU Delft OLD Housing Quality and Process Innovation); Straub, A. (TU Delft OLD Housing Quality and Process Innovation); Donze, Geurt (W/E Consultants Sustainable Building); van Hulten, Saskia (W/E Consultants Sustainable Building)","","2017","Sustainable development has been the focus of all major industries in the world, especially in the construction industry. As one of the sustainable construction modes, housing industrialization (HI) is now absorbing a growing number of attentions that lead the industry to go green. However, the implementation of HI in China is far from satisfactory due to its low economic efficiency. This paper attempts to improve the HI supply chain from a new perspective-transaction costs (TCs). First, it provides an objective understanding of status quo of HI in particular in China. Then, the study outlines the basis of TCs theories and supply chain management theory, compiling literature review of the application of TCs and supply chain management in other fields to states the feasibility of
their application in HI area. A theoretical framework is developed to explain the relationships and overlaps among these three areas. Analysis of the state of research in application of TCs in HI supply chain management is expected to help optimized the governance structure of HI supply chain.","green rating tool; design process; policy and regulation","en","conference paper","Construction Industry Council","","","","","","","","","","OLD Housing Quality and Process Innovation","","",""
"uuid:1982c5d6-b851-4de2-be00-0eab75ac0ee0","http://resolver.tudelft.nl/uuid:1982c5d6-b851-4de2-be00-0eab75ac0ee0","A Fully Integrated Discrete-Time Superheterodyne Receiver","Tohidian, M. (TU Delft Electronics); Madadi, I. (TU Delft Electronics); Staszewski, R.B. (TU Delft Electronics)","","2017","The zero/low intermediate frequency (IF) receiver (RX) architecture has enabled full CMOS integration. As the technology scales and wireless standards become ever more challenging, the issues related to time-varying dc offsets, the second-order nonlinearity, and flicker noise become more critical. In this paper, we propose a new architecture of a superheterodyne RX that attempts to avoid such issues. By exploiting discrete-time (DT) operation and using only switches, capacitors, and inverter-based gm-stages as building blocks, the architecture becomes amenable to further scaling. Full integration is achieved by employing a cascade of four complex-valued passive switched-cap-based bandpass filters sampled at 4× of the local oscillator rate that perform IF image rejection. Channel selection is achieved through an equivalent of the seventh-order filtering. A new twofold noise-canceling low-noise transconductance amplifier is proposed. Frequency domain analysis of the RX is presented by the proposed DT model. The RX is wideband and covers 0.4-2.9 GHz with a noise figure of 2.9-4 dB. It is implemented in 65-nm CMOS and consumes 48-79 mW.","Bandpass filters (BPF); Mixers; RF signals; Radio frequency; Receivers; Switches; Time-domain analysis; Bandpass filter (BPF); IIP2; discrete time (DT); process scalable; receiver (RX); superheterodyne; switched capacitor","en","journal article","","","","","","","","","","","Electronics","","",""
"uuid:d1a15ec0-7b9a-4f8f-ad47-b5c34a6e75aa","http://resolver.tudelft.nl/uuid:d1a15ec0-7b9a-4f8f-ad47-b5c34a6e75aa","CVD transfer-free graphene for sensing applications","Schiattarella, Chiara (Università degli Studi di Napoli Federico II); Vollebregt, S. (TU Delft Electronic Components, Technology and Materials); Polichetti, Tiziana (ENEA Research Center); Alfano, Brigida (ENEA Research Center); Massera, Ettore (ENEA Research Center); Miglietta, Maria L. (ENEA Research Center); Di Francia, Girolamo (ENEA Research Center); Sarro, Pasqualina M (TU Delft Electronic Components, Technology and Materials)","","2017","The sp2 carbon-based allotropes have been extensively exploited for the realization of gas sensors in the recent years because of their high conductivity and large specific surface area. A study on graphene that was synthetized by means of a novel transfer-free fabrication approach and is employed as sensing material is herein presented. Multilayer graphene was deposited by chemical vapour deposition (CVD) mediated by CMOS-compatible Mo. The utilized technique takes advantage of the absence of damage or contamination of the synthesized graphene, because there is no need for the transfer onto a substrate. Moreover, a proper pre-patterning of the Mo catalyst allows one to obtain graphene films with different shapes and dimensions. The sensing properties of the material have been investigated by exposing the devices to NO2, NH3 and CO, which have been selected because they are wellknown hazardous substances. The concentration ranges have been chosen according to the conventional monitoring of these gases. The measurements have been carried out in humid N2 environment, setting the flow rate at 500 sccm, the temperature at 25 °C and the relative humidity (RH) at 50%. An increase of the conductance response has been recorded upon exposure towards NO2, whereas a decrease of the signal has been detected towards NH3. 
The material appears totally insensitive towards CO. Finally, the sensing selectivity has been proven by evaluating and comparing the degree of adsorption and the interaction energies for NO2 and NH3 on graphene. The direct-growth approach for the synthesis of graphene opens a promising path towards diverse applicative scenarios, including the straightforward integration in electronic devices.","Ammonia; Chemiresistors; CMOS-compatible process; Graphene; Nitrogen dioxide; Transfer-free growth","en","journal article","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:ae28c34f-4226-48ad-b247-69f8278f43e2","http://resolver.tudelft.nl/uuid:ae28c34f-4226-48ad-b247-69f8278f43e2","Evaluation methodology for tariff design under escalating penetrations of distributed energy resources","Abdelmotteleb, I.I.A. (TU Delft Energie and Industrie; Institute for Research in Technology (IIT); Comillas Pontifical University); Gómez, Tomás (Comillas Pontifical University); Reneses, Javier (Comillas Pontifical University)","","2017","As the penetration of distributed energy resources (DERs) escalates in distribution networks, new network tariffs are needed to cope with this new situation. These tariffs should allocate network costs to users, promoting an efficient use of the distribution network. This paper proposes a methodology to evaluate and compare network tariff designs. Four design attributes are proposed for this aim: (i) network cost recovery; (ii) deferral of network reinforcements; (iii) efficient consumer response; (iv) recognition of side-effects on consumers. Through an analytical hierarchy process (AHP), the evaluation methodology is applied to compare traditional cost allocation methods, on the basis of 100% energy, 100% demand, and 50% energy-50% demand, with more advanced pricing methods based on distribution locational marginal prices in combination with cost-reflective network charges. Numerical results are obtained through a case study based on the IEEE 34-node test feeder with DER integration. The results illustrate the advantage of advanced pricing methods to promote an efficient integration of DER and demand price-response from consumers.","Analytical hierarchy process; Distributed energy resources; Distribution locational marginal prices; Distribution network tariff; Peak-coincidence network charges","en","journal article","","","","","","","","","","","Energie and Industrie","","",""
"uuid:07c2bbfb-da4b-4311-b4eb-787e6e0bd937","http://resolver.tudelft.nl/uuid:07c2bbfb-da4b-4311-b4eb-787e6e0bd937","A Monte Carlo approach to the ship-centric Markov decision process for analyzing decisions over converting a containership to LNG power","Kana, A.A. (TU Delft Ship Design, Production and Operations); Harrison, B.M. (University of Michigan)","","2017","A Monte Carlo approach to the ship-centric Markov decision process (SC-MDP) is presented for analyzing whether a container ship should convert to LNG power in the face of evolving Emission Control Area regulations. The SC-MDP model was originally developed as a means to analyze uncertain, sequential decision making problems. However, the original model is limited in its handling of uncertainty by only using discrete probabilistic values to account for the uncertainty. This paper extends the model to include Monte Carlo simulations to gain a deeper understanding of how uncertainty affects decision making behavior. A case study is presented involving the impact of evolving Emission Control Areas on the design and operation of a notional 13,000 TEU container ship. The decision of whether to invest in a dual fuel LNG engine is analyzed given uncertainties in economic parameters, regulatory scenarios, and supply chain risks. The case study is used to show how variations in uncertain parameters can have a drastic effect on optimal decision strategies.","Decision making; Emission Control Area; LNG; Markov decision process; Monte Carlo simulation; Uncertainty analysis","en","journal article","","","","","","","","2019-01-15","","","Ship Design, Production and Operations","","",""
"uuid:2371e789-90d8-42aa-909d-546406fae24a","http://resolver.tudelft.nl/uuid:2371e789-90d8-42aa-909d-546406fae24a","Epidemic mitigation via awareness propagation in communication networks: The role of time scales","Wang, H. (TU Delft Multimedia Computing); Chen, Chuyi (External organisation); Qu, B. (TU Delft Multimedia Computing); Li, Daqing (Beihang University)","","2017","The participation of individuals in multi-layer networks allows for feedback between network layers, opening new possibilities to mitigate epidemic spreading. For instance, the spread of a biological disease such as Ebola in a physical contact network may trigger the propagation of the information related to this disease in a communication network, e.g. an online social network. The information propagated in the communication network may increase the awareness of some individuals, resulting in them avoiding contact with their infected neighbors in the physical contact network, which might protect the population from the infection. In this work, we aim to understand how the time scale γ of the information propagation (speed that information is spread and forgotten) in the communication network relative to that of the epidemic spread (speed that an epidemic is spread and cured) in the physical contact network influences such mitigation using awareness information. We begin by proposing a model of the interaction between information propagation and epidemic spread, taking into account the relative time scale γ. We analytically derive the average fraction of infected nodes in the meta-stable state for this model (i) by developing an individual-based mean-field approximation (IBMFA) method and (ii) by extending the microscopic Markov chain approach (MMCA). 
We show that when the time scale γ of the information spread relative to the epidemic spread is large, our IBMFA approximation is better compared to MMCA near the epidemic threshold, whereas MMCA performs better when the prevalence of the epidemic is high. Furthermore, we find that an optimal mitigation exists that leads to a minimal fraction of infected nodes. The optimal mitigation is achieved at a non-trivial relative time scale γ, which depends on the rate at which an infected individual becomes aware. Contrary to our intuition, information spread too fast in the communication network could reduce the mitigation effect. Finally, our finding has been validated in the real-world two-layer network obtained from the location-based social network Brightkite.","epidemic mitigation; epidemic spreading; interacting processes; multi-layer networks; time scale","en","journal article","","","","","","","","","","","Multimedia Computing","","",""
"uuid:3767b56d-2610-4e6c-b157-d2bdb2d811a5","http://resolver.tudelft.nl/uuid:3767b56d-2610-4e6c-b157-d2bdb2d811a5","Ultra-Stretchable Interconnects for High-Density Stretchable Electronics","Shafqat, Salman (Eindhoven University of Technology); Hoefnagels, Johan P.M. (Eindhoven University of Technology); Savov, A.M. (TU Delft Electronic Components, Technology and Materials); Joshi, S. (TU Delft Electronic Components, Technology and Materials); Dekker, R. (TU Delft Electronic Components, Technology and Materials; Philips Research); Geers, Marc G.D. (Eindhoven University of Technology)","","2017","The exciting field of stretchable electronics (SE) promises numerous novel applications, particularly in-body and medical diagnostics devices. However, future advanced SE miniature devices will require high-density, extremely stretchable interconnects with micron-scale footprints, which calls for proven standardized (complementary metal-oxide semiconductor (CMOS)-type) process recipes using bulk integrated circuit (IC) microfabrication tools and fine-pitch photolithography patterning. Here, we address this combined challenge of microfabrication with extreme stretchability for high-density SE devices by introducing CMOS-enabled, free-standing, miniaturized interconnect structures that fully exploit their 3D kinematic freedom through an interplay of buckling, torsion, and bending to maximize stretchability. Integration with standard CMOS-type batch processing is assured by utilizing the Flex-to-Rigid (F2R) post-processing technology to make the back-end-of-line interconnect structures free-standing, thus enabling the routine microfabrication of highly-stretchable interconnects. The performance and reproducibility of these free-standing structures is promising: an elastic stretch beyond 2000% and ultimate (plastic) stretch beyond 3000%, with <0.3% resistance change, and >10 million cycles at 1000% stretch with <1% resistance change. 
This generic technology provides a new route to exciting highly-stretchable miniature devices.","Complementary metal-oxide semiconductor (CMOS) processing; Mechanical size-effects; Miniaturized interconnects; Stretchable electronics; Ultra-stretchability","en","journal article","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:0c3909f3-e057-43de-987b-a6cda8bd96a2","http://resolver.tudelft.nl/uuid:0c3909f3-e057-43de-987b-a6cda8bd96a2","Fast ℓ1-regularized space-Time adaptive processing using alternating direction method of multipliers","Qin, Lilong (National University of Defense Technology); Wu, Manqing (China Electronics Technology Group Corporation); Wang, X. (TU Delft Microwave Sensing, Signals & Systems); Dong, Zhen (National University of Defense Technology)","","2017","Motivated by the sparsity of filter coefficients in full-dimension space-Time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-To-clutter-noise ratio performance than other algorithms.","alternating direction method of multipliers; generalized side-lobe canceler; recursive least-squares; space-Time adaptive processing; sparse representation","en","journal article","","","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:0262d624-296d-4128-90e3-7db4365aa234","http://resolver.tudelft.nl/uuid:0262d624-296d-4128-90e3-7db4365aa234","Towards high-quality multi-spot-welded joints in thermoplastic composite structures","Zhao, T. (TU Delft Structural Integrity & Composites); Palardy, G. (TU Delft Structural Integrity & Composites); Villegas, I.F. (TU Delft Structural Integrity & Composites); Benedictus, R. (TU Delft Aerospace Structures & Materials)","","2017","Ultrasonic welding is a promising assembly technique for thermoplastic composites and it is well-suited for spot welding. In this study, two typical welding process control strategies, i.e. displacement-controlled and energy-controlled welding, were selected to manufacture spot-welded joints. The influence of different boundary conditions, provided by different welding jigs, on the welding process and the performance of thus-created welded joints were investigated. The optimum input energy was found dependent on the welding jigs, while the optimum displacement was consistent for achieving the maximum weld strength in both welding jigs. Therefore, displacement-controlled welding showed more potential in producing consistent welds in sequential multi-spot welding.","Fracture analysis; Mechanical behaviour; Thermoplastic composites; Ultrasonic spot welding; Welding process control","en","conference paper","International Committee on Composite Materials","","","","","","","","","Aerospace Structures & Materials","Structural Integrity & Composites","","",""
"uuid:efb45c8d-e8a2-4840-8aa5-a6fd5f4999f0","http://resolver.tudelft.nl/uuid:efb45c8d-e8a2-4840-8aa5-a6fd5f4999f0","Toward Bio-based geo- & Civil Engineering for a Sustainable Society","Jonkers, H.M. (TU Delft Materials and Environment)","","2017","The since 2010 running research program 'Bio-Based Geo & Civil Engineering for a Sustainable Society (BioGeoCivil)', funded by the Dutch technology foundation STW, aims to develop novel bio-based construction materials that can be used in Civil- and Geo-engineering constructions to enhance the sustainability performance of the sector. Rationale is that the sector produces still today excess amounts of waste in all life cycle phases of a construction, from building to use phase as well as end-of-life phase. Aim of the program is to mimic nature as 'building' processes in nature do not produce any waste as all elements, also residual material. is considered a high grade resource. In order to substantially improve the sustainability profile of the sector, upgrading of secondary- or byproducts must be achieved to allow functional performance similar to primary materials and resources. The challenge of the six currently running projects within the BioGeoCivil program is therefore not only to mimic nature but also to include bio-based materials or processes in civil- or geo-engineering applications which result, in comparison to traditional building products, in drastically improved performance both on sustainability and durability level. The six projects comprise: 1. Fungal biofilms (coating) for wood protection, 2. Bacteria-based repair and performance improvements of aged concrete structures, 3. Bacteria-based ground stabilization to mitigate liquefaction and piping of granular sediments, 4. Engineering of bacterial biofilms on buildings and infrastructure as a basis for natural protection, 5. 
Lift up Lowlands: upgrading of natural materials (bio-remediation of sludge) for sustainable lift up of low lying polder areas, and 6. Towards the development of carbon dioxide neutral renewable cement.","Bio-based processes; cement; civil- and geo-engineering; concrete; soil","en","journal article","","","","","","","","","","","Materials and Environment","","",""
"uuid:0e0d1515-205b-402a-b8b3-1e3e755e0872","http://resolver.tudelft.nl/uuid:0e0d1515-205b-402a-b8b3-1e3e755e0872","Personalized design process for persuasive technologies","van Dooren, M.M.M. (TU Delft Design Aesthetics); Visch, V.T. (TU Delft Design Aesthetics); Spijkerman, Renske (Parnassia Addiction Research Centre)","Orji, R. (editor); Reisinger, M. (editor); Busch, M. (editor); Dijkstra, A. (editor); Kaptein, M. (editor); Mattheiss, E. (editor)","2017","In this position paper we discuss the application of personalization in persuasive technology design in light of the Personalized Design Process model (PDP-model). The PDP-model defines personalization as aligning a persuasive product to the end-user by stakeholder involvement (i.e. designers, endusers, domain experts and family/relatives) across the Problem Definition-, the Product Design- and/or the Tailoring design phases. It is expected that personalization in a PDP enhances the motivation of end-users to interact longer and more frequently with a product, increasing the likelihood that the product will reach its aimed-for effect. Although personalization in a PDP is a common method in persuasive product design, its added value has not been sufficiently validated by scientific research. We propose several reasons for the frequent use of personalization in a PDP, despite the lack of evidence for its added value. Furthermore, we discuss how personalization could be validated according to the PDP-model.","Design; Personalized; Persuasive technology; Process; Tailoring","en","conference paper","CEUR-WS","","","","","","","","","","Design Aesthetics","","",""
"uuid:40504aa2-6b4a-4dad-b1db-42289b08e5bb","http://resolver.tudelft.nl/uuid:40504aa2-6b4a-4dad-b1db-42289b08e5bb","Radio astronomical image formation using constrained least squares and Krylov subspaces","Mouri Sardarabadi, A.; Leshem, A.; Van der Veen, A.J.","","2016","Aims. Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods. In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical “dirty image” is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results. Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. 
We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources and producing highly accurate intensity estimates.","interferometers; numerical method; image processing","en","journal article","EDP Sciences","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:c351e276-e7e2-4153-98e6-bea6882cfb30","http://resolver.tudelft.nl/uuid:c351e276-e7e2-4153-98e6-bea6882cfb30","Concrete in dynamic tension: The fracture process","Vegt, I. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Weerheijm, J. (copromotor); Delft University of Technology (degree granting institution)","2016","The fracture properties of concrete are rate dependent. In this thesis the results of tensile tests at static, moderate and high loading rates are presented. The results show the influence of the loading rate not only on the tensile strength, but also on the fracture energy, the stress-deformation relation and fracture parameters, such as fracture lengths and the width of the fracture zone. The failure mechanisms are reconstructed and the dominant mechanisms behind the rate dependency are identified. By using basic principles of fracture mechanics and a simple model based on the Stefan effect, the loading rates at which the mechanisms have a significant effect have been determined. The dominant mechanisms found in the research can be implemented in dynamic models and the acquired data set can be used to validate numerical models.","Fracture process; Dynamic loading; Concrete","en","doctoral thesis","","978-94-6186-747-6","","","","","","","","","Materials and Environment","","",""
"uuid:86a7f178-fe77-4607-9cfe-77f1bc3b90d8","http://resolver.tudelft.nl/uuid:86a7f178-fe77-4607-9cfe-77f1bc3b90d8","On the duality of globally constrained separable problems and its application to distributed signal processing","Sherson, T.W. (TU Delft Signal Processing Systems); Heusdens, R. (TU Delft Signal Processing Systems); Kleijn, W.B. (TU Delft Signal Processing Systems; Victoria University of Wellington)","","2016","In this paper, we focus on the challenge of processing data generated within decentralised wireless sensor networks in a distributed manner. When the desired operations can be expressed as globally constrained separable convex optimisation problems, we show how we can convert these to extended monotropic programs and exploit Lagrangian duality to form equivalent distributed consensus problems. Such problems can be embedded in sensor network applications via existing solvers such as the alternating direction method of multipliers or the primal dual method of multipliers. We then demonstrate how this approach can be used to solve specific problems including linearly constrained quadratic problems and the classic Gaussian channel capacity maximisation problem in a distributed manner.","extended monotropic programs; Wireless sensor networks; distributed signal processing; Lagrangian duality","en","conference paper","IEEE","","","","","","","","","","Signal Processing Systems","","",""
"uuid:29afd4c5-2274-47d9-9add-c9dceba18076","http://resolver.tudelft.nl/uuid:29afd4c5-2274-47d9-9add-c9dceba18076","Autotrophic Nitrogen Removal from Low Concentrated Effluents: Study of system configurations and operational features for post-treatment of anaerobic effluents","Sanchez Guilen, J.A. (TU Delft Sanitary Engineering; IHE Delft Institute for Water Education)","van Lier, J.B. (promotor); Brdjanovic, Damir (promotor); Lopez Vazquez, Carlos (copromotor); Delft University of Technology (degree granting institution)","2016","On a global scale, sewage represents the main point-source of water pollution and is also the predominant source of nitrogen contamination in urban regions. The present research is focused on the study of the main challenges that need to be addressed in order to achieve a successful inorganic nitrogen post-treatment of anaerobic effluents in the mainstream. The post-treatment is based on autotrophic nitrogen removal. The challenges are classified in terms of operational features and system configuration, namely: (i) the short-term effects of the organic carbon source, the COD/N ratio and the temperature on the autotrophic nitrogen removal; the results from this study confirm that the Anammox activity is strongly influenced by temperature, regardless of the COD source and COD/N ratios applied. (ii) The long-term performance of the Anammox process under low nitrogen sludge loading rate (NSLR) and moderate to low temperatures; it demonstrates that the NSLR affects the nitrogen removal efficiency, granular size and biomass concentration of the bioreactor. (iii) The Anammox cultivation in a closed sponge-bed trickling filter (CSTF) and (iv) the autotrophic nitrogen removal over nitrite in a sponge-bed trickling filter (STF). 
Both types of Anammox sponge-bed trickling filters offer a plain technology with good nitrogen removal efficiency.","Anaerobic post treatment; Anammox; batch processing; Ammonium Oxidizing Organisms (AOO); Candidatus Brocadia fulgida; chemical oxygen demand/nitrogen (COD/N) ratio; closed sponge-bed trickling filter (CSTF); down flow hanging sponge (DHS) systems; granular biomass; immobilized biomass; mainstream nitrogen removal; Nitrite Oxidizing Organisms (NOO); nitrogen sludge loading rate; organic carbon source; partial nitritation; sponge-bed trickling filter (STF); temperature; upflow anaerobic sludge blanket (UASB) reactor","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-03591-1","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Sanitary Engineering","","",""
"uuid:14dc1cdd-618b-495f-a32a-d21209f47153","http://resolver.tudelft.nl/uuid:14dc1cdd-618b-495f-a32a-d21209f47153","3D surface-wave estimation and separation using a closed-loop approach","Ishiyama, T. (TU Delft Applied Geophysics and Petrophysics; INPEX Corporation); Blacquière, G. (TU Delft Applied Geophysics and Petrophysics); Verschuur, D.J. (TU Delft ImPhys/Acoustical Wavefield Imaging); Mulder, W.A. (TU Delft Applied Geophysics and Petrophysics; Shell Global Solutions International B.V.)","","2016","Surface waves in seismic data are often dominant in a land or shallow-water environment. Separating them from primaries is of great importance either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near-surface characterization. However, their complex properties make the surface-wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three-dimensional surface-wave estimation and separation using an iterative closed-loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward-modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface-wave model is parameterized by the frequency-dependent slowness and source properties for each surface-wave mode. The optimal parameters are estimated in such a way that the residual is minimized and, consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them out from the seismic data. 
In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.","Data processing; Inverse problem; Near surface; Noise; Parameter estimation; Separation; Surface wave","en","journal article","","","","","","","","2018-12-31","","","Applied Geophysics and Petrophysics","","",""
"uuid:727dff70-2536-42f5-99cb-659d086404a2","http://resolver.tudelft.nl/uuid:727dff70-2536-42f5-99cb-659d086404a2","An Ideal-Theoretic Criterion for Localization of an Unknown Number of Sources","Morency, M.W. (TU Delft Signal Processing Systems); Vorobyov, Sergiy A. (Aalto University); Leus, G.J.T. (TU Delft Signal Processing Systems)","Matthews, Michael B. (editor)","2016","Source localization is among the most fundamental problems in statistical signal processing. Methods which rely on the orthogonality of the signal and noise subspaces, such as Pisarenko’s method, MUSIC, and root-MUSIC are some of the most widely used algorithms to solve this problem. As a common feature, these methods require both a-priori knowledge of the number of sources, and an estimate of the noise subspace. Both requirements are complicating factors to the practical implementation of the algorithms, and sources of potentially severe error. In this paper, we propose a new localization criterion based on the algebraic structure of the noise subspace. An algorithm is proposed which adaptively learns the number of sources and estimates their locations. Simulation results show significant improvement over root-MUSIC, even when the correct number of sources is provided to the root-MUSIC algorithm.","Eigenvalues and eigenfunctions; Multiple signal classification; Signal processing algorithms; Generators; Position measurement; Clustering algorithms; Estimation","en","conference paper","IEEE","","","","","","","","","","Signal Processing Systems","","",""
"uuid:fa82dbee-5440-466f-a6b5-8444b704e934","http://resolver.tudelft.nl/uuid:fa82dbee-5440-466f-a6b5-8444b704e934","Modeling and reconstruction of time series of passive microwave data by Discrete Fourier Transform guided filtering and Harmonic Analysis","Shang, H. (TU Delft Optical and Laser Remote Sensing; Chinese Academy of Sciences; Joint Center for Global Change Studies (JCGCS)); Jia, L. (Chinese Academy of Sciences; Joint Center for Global Change Studies (JCGCS)); Menenti, M. (TU Delft Optical and Laser Remote Sensing; Chinese Academy of Sciences)","","2016","Daily time series of microwave radiometer data obtained in one-orbit direction are full of observation gaps due to satellite configuration and errors from spatial sampling. Such time series carry information about the surface signal including surface emittance and vegetation attenuation, and the atmospheric signal including atmosphere emittance and atmospheric attenuation. To extract the surface signal from this noisy time series, the Time Series Analysis Procedure (TSAP) was developed, based on the properties of the Discrete Fourier Transform (DFT). TSAP includes two stages: (1) identify the spectral features of observation gaps and errors and remove them with a modified boxcar filter; and (2) identify the spectral features of the surface signal and reconstruct it with the Harmonic Analysis of Time Series (HANTS) algorithm. Polarization Difference Brightness Temperature (PDBT) at 37 GHz data were used to illustrate the problems, to explain the implementation of TSAP and to validate this method, due to the PDBT sensitivity to the water content both at the land surface and in the atmosphere. We carried out a case study on a limited heterogeneous crop land and lake area, where the power spectrum of the PDBT time series showed that the harmonic components associated with observation gaps and errors have periods ≤8 days. 
After applying the modified boxcar filter with a length of 10 days, the RMSD between raw and filtered time series was above 11 K, mainly related to the power reduction in the frequency range associated with observation gaps and errors. Noise reduction is beneficial when applying PDBT observations to monitor wet areas and open water, since the PDBT range between dryland and open water is about 20 K. The spectral features of the atmospheric signal can be revealed by time series analysis of rain-gauge data, since the PDBT at 37 GHz is mainly attenuated by hydrometeors that yield precipitation. Thus, the spectral features of the surface signal were identified in the PDBT time series with the help of the rain-gauge data. HANTS reconstructed the upper envelope of the signal, i.e., correcting for atmospheric influence, while retaining the spectral features of the surface signal. To evaluate the impact of TSAP on retrieval accuracy, the fraction of Water Saturated Surface (WSS) in the region of Poyang Lake was retrieved with 37 GHz observations. The retrievals were evaluated against estimations of the lake area obtained with MODerate-resolution Imaging Spectroradiometer (MODIS) and Advanced Synthetic Aperture Radar (ASAR) data. The Relative RMSE on WSS was 39.5% with unfiltered data and 23% after applying TSAP, i.e., using the estimated surface signal only.","Data process; Discrete Fourier Transform (DFT); Filter design; Harmonic Analysis; Microwave radiometer data","en","journal article","","","","","","","","","","","Optical and Laser Remote Sensing","","",""
"uuid:a11c6c6a-9fdb-4b7d-8c64-48ddaebc9d57","http://resolver.tudelft.nl/uuid:a11c6c6a-9fdb-4b7d-8c64-48ddaebc9d57","Transmission electron imaging in the Delft multibeam scanning electron microscope 1","Ren, Y. (TU Delft ImPhys/Charged Particle Optics); Kruit, P. (TU Delft ImPhys/Charged Particle Optics)","","2016","Our group is developing a multibeam scanning electron microscope (SEM) with 196 beams in order to increase the throughput of SEM. Three imaging systems using, respectively, transmission electron detection, secondary electron detection, and backscatter electron detection are designed in order to make it as versatile as a single-beam SEM. This paper focuses on the realization of the transmission electron imaging system, which is motivated by biologists' interest in the particular contrast this can give. A thin sample is placed on fluorescent material which converts the transmitted electrons to photons. Then, the 196 photon beams are focused with a large magnification onto a camera via a high-quality optical microscope integrated inside the vacuum chamber. Intensities of the transmission beams are retrieved from the camera images and constructed to form each beam's image using an offline image processing program. Experimental results prove the working principle of transmission electron imaging and show that details of 10-20 nm in images of biological specimens are visible. Problems encountered in the results are discussed and plans for future improvements are suggested.","Cameras; Scanning electron microscopy; Image detection systems; Electron beams; Image processing","en","journal article","","","","","","","","2017-10-27","","","ImPhys/Charged Particle Optics","","",""
"uuid:68241e91-9025-42f6-b0d2-0f5e000a02a8","http://resolver.tudelft.nl/uuid:68241e91-9025-42f6-b0d2-0f5e000a02a8","Can Data from BIMs be Used as Input for a 3D Cadastre?","Oldfield, Jennifer; van Oosterom, Peter; Quak, Wilko; van der Veen, Jeroen; Beetz, Jakob","","2016","Much work has already been done on how a 3D Cadastre should best be developed. An inclusive information model, the Land Administration Model (LADM ISO 19152), has been developed to provide an international framework for how this can best be done. While this generic framework encompasses a wide range of eventualities, it does not prescribe the data format. One existing source from which data could be obtained is 3D Building Information Models (BIMs), or more specifically in this context, BuildingSMART’s Industry Foundation Class (IFC). Obtaining data is only one part of the process of moving from a 2D to a 3D Cadastre. An efficient collaborative workflow, preferably digital, also needs to be developed. This digital workflow would determine what the 3D Cadastre needs from a 3D BIM and the process of extracting it, in addition to exchange requirements. Foundations, however, would need to be laid in order to facilitate this process. To begin with, in spite of the fact that the Industry Foundation Class (IFC) is already quite an extensive model, in order to satisfy the requirements of cadastral legal spaces it would need to be enriched further. Enriching it would enable data for a 3D Cadastre to be extracted from both as-designed and as-built BIMs. Experience has shown that process harmonization between organizations is non-trivial and dependent on specific organizations within countries. Standardizing at an international level is therefore wiser to avoid. However, a collaborative workflow described in BuildingSMART’s Information Delivery Manual (IDM) is a useful illustration of how the involved actors could collaborate. 
Moreover, communicating the information extraction process of BIM data to 3D parcels to actors in the building world using their own lingua franca could be beneficial.","as-built BIM; BIM; Open BIM; Industry Foundation Classes (IFC); Business Process Modelling Notation (BPMN); Information Delivery Manual (IDM); Smart Cities; 3D Cadastre; as-designed BIM","en","conference paper","","","","","","","","","","","","","",""
"uuid:8f380f95-460a-4741-8e5d-6062bd7dd04b","http://resolver.tudelft.nl/uuid:8f380f95-460a-4741-8e5d-6062bd7dd04b","Can Data from BIMs be Used as Input for a 3D Cadastre?","Oldfield, Jennifer; van Oosterom, P.J.M. (TU Delft OLD Department of GIS Technology); Quak, C.W. (TU Delft OLD Department of GIS Technology); Veen, Jeroen van der; Beetz, Jakob","van Oosterom, P.J.M. (editor); Dimopoulou, Efi (editor); Fendel, Elfriede M. (editor)","2016","Much work has already been done on how a 3D Cadastre should best be developed. An inclusive information model, the Land Administration Model (LADM ISO 19152), has been developed to provide an international framework for how this can best be done. While this generic framework encompasses a wide range of eventualities, it does not prescribe the data format. One existing source from which data could be obtained is 3D Building Information Models (BIMs), or more specifically in this context, BuildingSMART’s Industry Foundation Class (IFC). Obtaining data is only one part of the process of moving from a 2D to a 3D Cadastre. An efficient collaborative workflow, preferably digital, also needs to be developed. This digital workflow would determine what the 3D Cadastre needs from a 3D BIM and the process of extracting it, in addition to exchange requirements. Foundations, however, would need to be laid in order to facilitate this process. To begin with, in spite of the fact that the Industry Foundation Class (IFC) is already quite an extensive model, in order to satisfy the
requirements of cadastral legal spaces it would need to be enriched further. Enriching it would enable data for a 3D Cadastre to be extracted from both as-designed and as-built BIMs. Experience has shown that process harmonization between organizations is non-trivial and dependent on specific organizations within countries. Standardizing at an international level is therefore wiser to avoid. However, a collaborative workflow described in BuildingSMART’s Information Delivery Manual (IDM) is a useful illustration of how the involved actors could collaborate. Moreover, communicating the information extraction
process of BIM data to 3D parcels to actors in the building world using their own lingua franca could be beneficial.","BIM; Open BIM; Industry Foundation Classes (IFC); Business Process Modelling Notation (BPMN); Information Delivery Manual (IDM); Smart Cities; 3D Cadastre; as-designed BIM; as-built BIM","en","conference paper","International Federation of Surveyors (FIG)","","","","","","","","","","OLD Department of GIS Technology","","",""
"uuid:9fa27d25-1e58-473e-828a-b219bf465438","http://resolver.tudelft.nl/uuid:9fa27d25-1e58-473e-828a-b219bf465438","In-line monitoring of solvents during CO2 absorption using multivariate data analysis","Kachko, A. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Bardow, André (promotor); Delft University of Technology (degree granting institution)","2016","","chemometrics; in-line; chemical process monitoring; multivariate data analysis; carbon dioxide capture","en","doctoral thesis","","978-94-6186-673-8","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:361057db-a92a-4c1c-82bc-9350b8883ac0","http://resolver.tudelft.nl/uuid:361057db-a92a-4c1c-82bc-9350b8883ac0","Liquid Silicon for Printed Polycrystalline Silicon Thin-Film Transistors on Paper","Trifunovic, M. (TU Delft QID/Ishihara Lab)","Sarro, Pasqualina M (promotor); Ishihara, R. (copromotor); Delft University of Technology (degree granting institution)","2016","","Liquid Silicon; Solution-Processing; Flexible Electronics; Paper Electronics; Excimer Laser Crystallization; Printed Electronics","en","doctoral thesis","","978-94-028-0315-0","","","","","","2018-08-29","","","QID/Ishihara Lab","","",""
"uuid:70125d46-9d39-412a-8ae2-403992d91e6f","http://resolver.tudelft.nl/uuid:70125d46-9d39-412a-8ae2-403992d91e6f","Can bed load transport drive varying depositional behaviour in river delta environments?","van der Vegt, H. (TU Delft Applied Geology); Storms, J.E.A. (TU Delft Applied Geology); Walstra, D.J.R. (TU Delft Coastal Engineering; Deltares); Howes, N.C. (Shell Projects & Technology)","","2016","Understanding the processes and conditions at the time of deposition is key to the development of robust geological models which adequately approximate the heterogeneous delta morphology and stratigraphy they represent. We show how the mechanism of sediment transport (the proportion of the sediment supply transported as bed load vs. suspended load) impacts channel kinematics, delta morphology and stratigraphy, to at least the same extent as the proportion of cohesive sediment supply. This finding is derived from 15 synthetic delta analogues generated by process-based simulations in Delft3D. The model parameter space varies the sediment transport mechanism against proportions of cohesive sediment whilst keeping the total sediment mass input constant. Proximal morphology and kinematics previously associated with sediment cohesivity are also produced by decreasing the proportion of bed load sediment transport. However, distal depositional patterns are different for changes in sediment transport and sediment load cohesivity. Changes in sediment transport mechanisms are also shown to impact clinoform geometry as well as the spatiotemporal scale of autogenic reorganisation through channel avulsions. We conclude that improving insight into the ratio of bed load to suspended load is crucial to predicting the geometric evolution of a delta.","Process-based modelling","en","journal article","","","","","","","","2018-11-01","","","Applied Geology","","",""
"uuid:1cd47604-e5ad-468b-995b-3a8be155ac0b","http://resolver.tudelft.nl/uuid:1cd47604-e5ad-468b-995b-3a8be155ac0b","Planning Under Uncertainty for Aggregated Electric Vehicle Charging with Renewable Energy Supply","Walraven, E.M.P. (TU Delft Algorithmics); Spaan, M.T.J. (TU Delft Algorithmics)","Kaminka, Gal A. (editor); Fox, Maria (editor); Bouquet, Paolo (editor); Hüllermeier, Eyke (editor); Dignum, Virginia (editor); Dignum, Frank (editor); van Harmelen, Frank (editor)","2016","Renewable energy sources introduce uncertainty regarding generated power in smart grids. For instance, power that is generated by wind turbines is time-varying and dependent on the weather. Electric vehicles will become increasingly important in the development of smart grids with a high penetration of renewables, because their flexibility makes it possible to charge their batteries when renewable supply is available. Charging of electric vehicles can be challenging, however, because of uncertainty in renewable supply and the potentially large number of vehicles involved. In this paper we propose a vehicle aggregation framework which uses Markov Decision Processes to control electric vehicles and deals with uncertainty in renewable supply. We present a grouping technique to address the scalability aspects of our framework. In experiments we show that the aggregation framework maximizes the profit of the aggregator, reduces cost of customers and reduces consumption of conventionally-generated power.","smart grids; electric vehicles; EV charging; markov decision processes; planning under uncertainty","en","conference paper","IOS Press","","","","","","","","","","Algorithmics","","",""
"uuid:c5534962-50ed-49c1-b279-e3ffe7799658","http://resolver.tudelft.nl/uuid:c5534962-50ed-49c1-b279-e3ffe7799658","Design and Experimental Evaluation of Distributed Heterogeneous Graph-Processing Systems","Guo, Y. (TU Delft Dataintensive Systems); Varbanescu, A.L. (Universiteit van Amsterdam); Epema, D.H.J. (TU Delft Dataintensive Systems); Iosup, A. (TU Delft Dataintensive Systems)","","2016","Graph processing is increasingly used in a variety of domains, from engineering to logistics and from scientific computing to online gaming. To process graphs efficiently, GPU-enabled graph-processing systems such as TOTEM and Medusa exploit the GPU or the combined CPU+GPU capabilities of a single machine. Unlike scalable distributed CPU-based systems such as Pregel and GraphX, existing GPU-enabled systems are restricted to the resources of a single machine, including the limited amount of GPU memory, and thus cannot analyze the increasingly large-scale graphs we see in practice. To address this problem, we design and implement three families of distributed heterogeneous graph-processing systems that can use both the CPUs and GPUs of multiple machines. We further focus on graph partitioning, for which we compare existing graph-partitioning policies and a new policy specifically targeted at heterogeneity. We implement all our distributed heterogeneous systems based on the programming model of the single-machine TOTEM, to which we add (1) a new communication layer for CPUs and GPUs across multiple machines to support distributed graphs, and (2) a workload partitioning method that uses offline profiling to distribute the work on the CPUs and the GPUs. We conduct a comprehensive real-world performance evaluation for all three families. To ensure representative results, we select 3 typical algorithms and 5 datasets with different characteristics. 
Our results include algorithm run time, performance breakdown, scalability, graph partitioning time, and comparison with other graph-processing systems. They demonstrate the feasibility of distributed heterogeneous graph processing and show evidence of the high performance that can be achieved by combining CPUs and GPUs in a distributed environment.","Distributed Heterogeneous Systems; Graph Processing","en","conference paper","IEEE","","","","","","","","","","Dataintensive Systems","","",""
"uuid:3f86bf04-c6af-486f-b972-bd228d84ebed","http://resolver.tudelft.nl/uuid:3f86bf04-c6af-486f-b972-bd228d84ebed","On the anatomy of nearshore sandbars: A systematic exposition of inter-annual sandbar dynamics","Walstra, D.J.R. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Ruessink, BG (promotor); Reniers, A.J.H.M. (promotor); Delft University of Technology (degree granting institution)","2016","Nearshore sandbars have a lifetime of many years, during which they exhibit cyclic, offshore-directed behaviour with strong alongshore coherence. A bar is generated near the shoreline and grows in height and width while migrating offshore, before finally decaying at the seaward limit of the surf zone. It may take 10 to 15 years for a bar to exhibit this cycle. Four to five bars may occur simultaneously within a cross-shore bed profile. Alongshore variations in cross-shore bar position and bar amplitude are commonly observed. A strong or abrupt alongshore variability is referred to as a bar switch. At large spatial scales, the inter-annual bar dynamics may vary considerably across sites with very similar environmental settings. In particular, the bar cycle return period (Tr, i.e. the duration between two successive bar decay events) may differ by a factor of three to four. This type of change in Tr appears to be always present in time and is characterized as a persistent bar switch. At smaller (kilometer) scales, bar switches typically occur in areas with similar Tr-values on both sides of a bar switch and occasionally disappear when the bars re-attach. These are characterized as non-persistent bar switches. The assimilation of shoreface nourishments into the coastal system involves a strong interaction with the pre-existing sandbar system. Typically the placement of a shoreface nourishment just seaward of an outer bar reverses the bar cycle temporarily, inducing a landward migration of the bar system. 
The shoreface nourishment becomes absorbed in the coastal system as the new outer bar. At the distal ends of the shoreface nourishment, bar switches often manifest, owing to a distinct difference in the bar migration cycle phase that is induced. Given the importance of the bar-nourishment interaction, an improved understanding of the nearshore bar dynamics is expected to improve the efficacy of shoreface nourishments. Furthermore, the long-term evolution of the nearshore barred profiles is generally considered indicative of the quality of the modelling for the response of the entire nearshore coastal system. Therefore, the ability to perform reliable and robust a priori, long-term predictions has broad societal relevance in view of anticipated adverse impacts of climate change and sea level rise on the stability of coasts worldwide. Until now the anatomy of the nearshore sandbars has primarily been studied using field data. Although these studies have provided insight into how the geometric bar parameters respond to the external forcings, no comprehensive conceptual framework is available that explains the full life cycle of a sandbar and its associated characteristics. The overarching objective of this study is to elucidate the anatomy of the inter-annual bar morphology using a combined data and model approach. This overarching objective is in turn subdivided into three objectives aiming to understand key features of bar morphology and a further objective to enable a comprehensive modelling approach based on the acquired insights. The latter objective involves the development of an input-reduction framework for advanced process-based forward modelling of the inter-annual bar morphology.
1) To elucidate the morphodynamic processes that result in cross-shore transient sandbar amplitude responses (i.e. the transition from bar growth in the intertidal and across surf zone to sandbar decay at the seaward edge of the surf zone). 2) To establish the role of cross-shore processes in non-persistent bar switches. 3) To identify the dominant environmental variables and the associated mechanisms that govern the bar cycle return period. 4) To develop an input-reduction framework to enable the application of state-of-the-art process based forward area models to simulate the multi-annual bar behaviour and nearshore morphology.
A comprehensive study approach is adopted in which observations of the nearshore morphology are combined with detailed forward modeling of the bar dynamics at Noordwijk (The Netherlands) utilizing wave and water level observations as boundary conditions. The Noordwijk model acts as a reference for additional simulations at Egmond (The Netherlands) and at Hasaki (Japan) to address the specific characteristics of the nearshore sandbar morphodynamics as outlined above.
The transient cross-shore bar amplitude response
Based on a three-year hindcast of a bar cycle at Noordwijk (Netherlands) and on additional synthetic runs using a wave-averaged cross-shore process model, the dominant mechanisms that govern the bar amplitude growth and decay during net inter-annual offshore migration are identified. The bar amplitude response is particularly sensitive to the water depth above the bar crest, hXb, and the angle of wave incidence, θ. These variables largely control the amount of waves breaking on the bar and the strength and cross-shore distribution of the associated longshore current. The longshore current has its maximum landward of the bar crest, inducing additional stirring of sediment on the landward bar slope and trough. The enhanced sediment concentration in the trough region shifts the cross-shore transport peak landward of the bar crest, forcing bar amplitude growth during offshore migration. For increased hXb-values wave breaking becomes less frequent, reducing the influence of the longshore current on sediment stirring. Therefore, the resulting dominance of the cross-shore current results in a sediment transport peak at, or just seaward of, the bar crest causing bar amplitude decay. All four types of bar response (viz. all combinations of onshore/offshore migration and bar amplitude growth/decay) can occur for a single wave height and wave period combination, depending on hXb and θ. Additional hindcast runs in which the wave direction was assumed time-invariant confirmed that hXb and θ largely control the transient bar amplitude response.
The mechanics of non-persistent bar switches
Intra-site alongshore variability is greatest when bars display km-scale disruptions, indicative of a distinct alongshore phase shift in the bar cycle. An outer bar is then, for example, attached to an inner bar, referred to as a non-persistent bar switch. This large-scale alongshore variability is investigated by applying the reference model at 24 transects along a 6 km section of the barred beach at Noordwijk (The Netherlands). When alongshore variability is limited, the model predicts that the bars migrate offshore at approximately the same rate (i.e. the bars remain in phase). Only under specific bar configurations with high wave-energy levels is an increase in the alongshore variability predicted. This suggests that cross-shore processes may trigger a switch in the case of specific antecedent morphological configurations combined with storm conditions. It is expected that three-dimensional (3D) flow patterns augment the alongshore variability in such instances. In contrast to the observed bar behaviour, predicted bar morphologies on either side of a switch remain in different phases, even though the bars are occasionally located at a similar cross-shore position. In short, the 1D profile model is not able to remove a bar switch. This data-model mismatch suggests that 3D flow patterns are key to the dissipation of bar switches.
The mechanics of persistent bar switches and the bar cycle return period
To date, data-analytic studies have had only partial success in explaining differences in Tr, establishing at best weak correlations to local environmental characteristics. In the present approach the process-based profile reference model is utilized to investigate the non-linear interactions between the hydrodynamic forcing and the morphodynamic profile response for two sites. Despite strong similarity in environmental conditions, the sites at Noordwijk and Egmond on the Holland coast exhibit distinctly different Tr values. The detailed comparison of modelling results enables a consistent investigation of the role of specific parameters at a level of detail that could not have been achieved from observations alone, and provides insights into the mechanisms that govern Tr. The results reveal that the bed slope at the barred zone is the most important parameter governing Tr. As a bar migrates further offshore, a steeper slope results in a stronger relative increase in hXb which reduces wave breaking and in turn reduces the offshore migration rate. The deceleration of the offshore migration rate as the bar moves to deeper water - the morphodynamic feedback loop - contrasts with the initial enhanced offshore migration behaviour of the bar. The initial behaviour is determined by the intense wave breaking associated with the steeper profile slope. These mechanisms explain the counter-intuitive observations at Egmond where Tr is significantly longer than at Noordwijk despite Egmond having the more energetic wave climate which typically reduces Tr.
Input reduction for inter-annual advanced forward model applications
In order to avoid excessively long computation times, input reduction is imperative for the application of advanced forward morphodynamic area models to consider long-term (>years) predictions. Here, an input reduction framework for wave-dominated coastal settings is introduced. The framework comprises 4 steps, viz. (1) the selection of the duration of the original (full) time series of wave forcing, (2) the selection of the representative wave conditions, (3) the sequencing of these conditions, and (4) the time span after which the sequence is repeated. In step (2), the chronology of the original series is retained, while that is no longer the case in steps (3) and (4). We apply the framework to two different sites (Noordwijk, The Netherlands and Hasaki, Japan) with multiple nearshore sandbars but contrasting long-term offshore-directed behaviour: at Noordwijk the offshore migration is gradual and not coupled to individual storms, while at Hasaki the offshore migration is more episodic, and wave chronology appears to control the long-term evolution. The performance of the model with reduced wave climates is compared with a simulation with the actual (full) wave-forcing series. It is demonstrated that input reduction can dramatically affect long-term predictions, to such an extent that the main characteristics of the offshore bar cycle are no longer reproduced. This was the case at Hasaki, in particular, where all synthetic series that no longer retain the initial chronology (steps 3 and 4) lead to rather unrealistic long-term simulations. At Noordwijk, synthetic series can result in realistic behaviour, provided that the time span after which the sequence is repeated is not too large; the reduction of this time span has the same positive effect on the simulation as increasing the number of selected conditions in step 2.
It is further demonstrated that, although storms result in the largest morphological change, conditions with low to intermediate wave energy must be retained to obtain realistic long-term sandbar behaviour. The input-reduction framework must be applied in an iterative fashion to obtain a reduced wave climate that is able to simulate long-term sandbar behaviour sufficiently accurately within an acceptable computation time. These results imply that it is essential to consider input reduction as an intrinsic part of any model set-up, calibration and validation effort. The study outcomes indicate clearly that a relatively simple model can be utilized to study the highly non-linear interaction between the nearshore hydrodynamics and morphology in great detail. This was achieved through carefully designed numerical experiments in which the influence of a specific process or environmental variable was isolated and identified. Although the model only considers cross-shore processes, the numerical experiments generated new insights into the importance of 3D processes under particular morphological conditions of the nearshore barred profiles. Even though the model was successfully calibrated at Noordwijk, the application at Egmond showed a significantly reduced predictive capacity. The model was able to reproduce the main characteristics of the inter-annual bar morphodynamics, but the bar cycle return period was under-estimated by about 30%. This suggests that the model can capture trends fairly well, but is unable to produce accurate absolute predictions - a finding that has broader implications. As stated earlier, accurate predictions of the long-term evolution of the nearshore barred profiles are generally considered indicative of the quality of the modelling of the entire nearshore coastal system. 
Consequently, further improvement of morphodynamic process-based models, particularly for the nearshore zone, constitutes a major research priority.","Sandbars; Bar decay; Process based modeling; Unibest-TC; Cyclic bar behavior; Input reduction; Input filtering; Morphodynamic modeling; alongshore variability; bar switching; Noordwijk; Argus; Jarkus; morphodynamic feedback loop; Egmond; inter-annual bar dynamics","en","doctoral thesis","","978-94-6186-647-9","","","","","","","","","Coastal Engineering","","",""
"uuid:53c1b3eb-f964-4971-90a0-77b26ca940fc","http://resolver.tudelft.nl/uuid:53c1b3eb-f964-4971-90a0-77b26ca940fc","Synchronization and Spin-Flop Transitions for a Mean-Field XY Model in Random Field","Collet, F. (TU Delft Applied Probability; University of Bologna); Ruszel, W.M. (TU Delft Applied Probability)","","2016","We characterize the phase space for the infinite volume limit of a ferromagnetic mean-field XY model in a random field pointing in one direction with two symmetric values. We determine the stationary solutions and detect possible phase transitions in the interaction strength for fixed random field intensity. We show that at low temperature magnetic ordering appears perpendicularly to the field. The latter situation corresponds to a spin-flop transition.","Disordered models; Spin-flop transitions; XY models; Interacting particle systems; Mean-field interaction; Phase transition; Reversible Markov processes","en","journal article","","","","","","","","","","","Applied Probability","","",""
"uuid:3853c34e-1ffb-4181-bedd-235a09957e1b","http://resolver.tudelft.nl/uuid:3853c34e-1ffb-4181-bedd-235a09957e1b","Space-shift sampling of graph signals","Segarra, Santiago (University of Pennsylvania); Marques, Antonio G. (King Juan Carlos University); Leus, G.J.T. (TU Delft Signal Processing Systems); Ribeiro, Alejandro (University of Pennsylvania)","Dong, Min (editor); Zheng, Thomas Fang (editor)","2016","A novel scheme for sampling graph signals is proposed. Space-shift sampling can be understood as a hybrid scheme that combines selection sampling -- observing the signal values on a subset of nodes - and aggregation sampling - observing the signal values at a single node after successive aggregation of local data. Under the assumption of bandlimitedness, we state conditions and propose strategies for signal recovery in different settings. Being a more general procedure, space-shift sampling achieves smaller reconstruction errors than current schemes, as we illustrate through the reconstruction of the industrial activity in a graph of the U.S. economy.
The process of modelling and simulation in this specific production environment is discussed in detail. Problem specification and a new integrated simulation approach are presented. A case study in a large coal mine is used to demonstrate the impacts and evaluate the results in terms of reaching optimal production control decisions to increase average equipment utilization and control coal quality and quantity. The new approach is expected to lead to more robust decisions, improved efficiencies, and better coal quality management.","continuous mining; scheduling; stochastic process simulation; geological uncertainty","en","journal article","","","","","","","","","","","Resource Engineering","","",""
"uuid:c049f07f-037e-434d-b834-1178fc669a3b","http://resolver.tudelft.nl/uuid:c049f07f-037e-434d-b834-1178fc669a3b","Radioastronomical image reconstruction with regularized least squares","Naghibzadeh, S. (TU Delft Signal Processing Systems); Mouri Sardarabadi, A. (TU Delft Signal Processing Systems); van der Veen, A.J. (TU Delft Signal Processing Systems)","Dong, Min (editor); Zheng, Thomas Fang (editor)","2016","Image formation using the data from an array of sensors is a familiar problem in many fields such as radio astronomy, biomedical and geodetic imaging. The problem can be formulated as a least squares (LS) estimation problem and becomes ill-posed at high resolutions, i.e. large number of image pixels. In this paper we propose two regularization methods, one based on weighted truncation of the eigenvalue decomposition of the image deconvolution matrix and the other based on the prior knowledge of the ""dirty image"" using the available data. The methods are evaluated by simulations as well as actual data from a phased-array radio telescope in the Netherlands, the Low Frequency Array Radio Telescope (LOFAR).","radio astronomy; Array signal processing; image formation; interferometry; regularization","en","conference paper","IEEE","","","","","Accepted Author Manuscript","","","","","Signal Processing Systems","","",""
"uuid:b59728ef-e225-466d-b487-e601cbd5f606","http://resolver.tudelft.nl/uuid:b59728ef-e225-466d-b487-e601cbd5f606","On the Intersite Variability in Inter-Annual Nearshore Sandbar Cycles","Walstra, D.J.R. (TU Delft Coastal Engineering; Deltares); Wesselman, Daan (Universiteit Utrecht); van der Deyl, Eveline (Universiteit Utrecht); Ruessink, BG (Universiteit Utrecht)","","2016","Inter-annual bar dynamics may vary considerably across sites with very similar environmental settings. In particular, the variability of the bar cycle return period (Tr) may differ by a factor of 3 to 4. To date, data studies are only partially successful in explaining differences in Tr, establishing at best weak correlations to local environmental characteristics. Here, we use a process-based forward model to investigate the non-linear interactions between the hydrodynamic forcing and the morphodynamic profile response for two sites along the Dutch coast (Noordwijk and Egmond) that despite strong similarity in environmental conditions exhibit distinctly different Tr values. Our exploratory modeling enables a consistent investigation of the role of specific parameters at a level of detail that cannot be achieved from observations alone, and provides insights into the mechanisms that govern Tr. The results reveal that the bed slope in the barred zone is the most important parameter governing Tr. As a bar migrates further offshore, a steeper slope results in a stronger relative increase in the water depth above the bar crest which reduces wave breaking and in turn reduces the offshore migration rate. The deceleration of the offshore migration rate as the bar moves to deeper water—the morphodynamic feedback loop—contrasts with the initial enhanced offshore migration behavior of the bar. The initial behavior is determined by the intense wave breaking associated with the steeper profile slope. 
This explains the counter-intuitive observations at Egmond where Tr is significantly longer than at Noordwijk despite Egmond having the more energetic wave climate which typically reduces Tr.","morphodynamic feedback loop; Egmond; Noordwijk; inter-annual bar dynamics; process based modeling; Unibest-TC; sandbars; bar switch; morphodynamic modeling; cyclic bar behavior; Jarkus","en","journal article","","","","","","","","","","","Coastal Engineering","","",""
"uuid:66deb7b5-ab26-4f08-8d4b-c5cdf01ad7bd","http://resolver.tudelft.nl/uuid:66deb7b5-ab26-4f08-8d4b-c5cdf01ad7bd","Velocity analysis of simultaneous-source data using high-resolution semblance: Coping with the strong noise","Gan, S.; Wang, S.; Chen, Y.; Qu, S.; Zu, S.","","2016","Direct imaging of simultaneous-source (or blended) data, without the need of deblending, requires a precise subsurface velocity model. In this paper, we focus on the velocity analysis of simultaneous-source data using the normal moveout-based velocity picking approach.We demonstrate that it is possible to obtain a precise velocity model directly from the blended data in the common-midpoint domain. The similarity-weighted semblance can help us obtain much better velocity spectrum with higher resolution and higher reliability compared with the traditional semblance. The similarity-weighted semblance enforces an inherent noise attenuation solely in the semblance calculation stage, thus it is not sensitive to the intense interference. We use both simulated synthetic and field data examples to demonstrate the performance of the similarity-weighted semblance in obtaining reliable subsurface velocity model for direct migration of simultaneous-source data. The migrated image of blended field data using prestack Kirchhoff time migration approach based on the picked velocity from the similarity-weighted semblance is very close to the migrated image of unblended data.","image processing; controlled source seismology","en","journal article","Oxford University Press","","","","","","","","Applied Sciences","ImPhys/Imaging Physics","","","",""
"uuid:f0408346-e7ce-4f1b-b23f-d986413e058d","http://resolver.tudelft.nl/uuid:f0408346-e7ce-4f1b-b23f-d986413e058d","All-optical wavelength conversion by picosecond burst absorption in colloidal PbS quantum dots","Geiregat, P.A. (Universiteit Gent); Houtepen, A.J. (TU Delft ChemE/Opto-electronic Materials; Universiteit Gent); Van Thourhout, Dries (Universiteit Gent); Hens, Zeger (Universiteit Gent)","","2016","All-optical approaches to change the wavelength of a data signal are considered more energy-and cost-effective than current wavelength conversion schemes that rely on back and forth switching between the electrical and optical domains. However, the lack of cost-effective materials with sufficiently adequate optoelectronic properties hampers the development of this so-called all-optical wavelength conversion. Here, we show that the interplay between intraband and band gap absorption in colloidal quantum dots leads to a very strong and ultrafast modulation of the light absorption after photoexcitation in which slow components linked to exciton recombination are eliminated. This approach enables all-optical wavelength conversion at rates matching state-of-the-art convertors in speed, yet with cost-effective solution-processable materials. Moreover, the stronger light-matter interaction allows for implementation in small-footprint devices with low switching energies. Being a generic property, the demonstrated effect opens a pathway toward low-power integrated photonics based on colloidal quantum dots as the enabling material.","All-optical signal processing; Intraband absorption; Nanocrystals; Transient absorption","en","journal article","","","","","","Accepted Author Manuscript","","2016-12-21","","","ChemE/Opto-electronic Materials","","",""
"uuid:59b7d2e6-3a4d-40ef-b852-3789bb53f19d","http://resolver.tudelft.nl/uuid:59b7d2e6-3a4d-40ef-b852-3789bb53f19d","Impact of Two Plumes' Interaction on Submarine Melting of Tidewater Glaciers: A Laboratory Study","Cenedese, C.; Gatto, V.M.","","2016","Idealized laboratory experiments investigate the glacier–ocean boundary dynamics near a vertical glacier in a two-layer stratified fluid. Discharge of meltwater runoff at the base of the glacier (subglacial discharge) enhances submarine melting. In the laboratory, the effect of multiple sources of subglacial discharge is simulated by introducing freshwater at freezing temperature from two point sources at the base of an ice block representing the glacier. The buoyant plumes of cold meltwater and subglacial discharge water entrain warm ambient water, rise vertically, and interact within a layer of depth H2 if the distance between the sources x0 is smaller than H2α/0.35, where α is the entrainment constant. The plume water detaches from the glacier face at the interface between the two layers and/or at the free surface, as confirmed by previous numerical studies and field observations. A plume model is used to explain the observed nonmonotonic dependence of submarine melting on the sources’ separation. The distance between the two sources influences the entrainment of warm water in the plumes and consequently the amount of submarine melting and the final location of the meltwater within the water column. Two interacting plumes located very close together are observed to melt approximately half as much as two independent plumes. The inclusion, or parameterization, of the dynamics regulating multiple plumes’ interaction is therefore necessary for a correct estimate of submarine melting.
Hence, the distribution and number of sources of subglacial discharge may play an important role in glacial melt rates and fjord stratification and circulation.","geographic location/entity; glaciers; circulation/dynamics; buoyancy; entrainment; ocean dynamics; small scale processes; models and modeling; Laboratory/physical models","en","journal article","American Meteorological Society","","","","","","","2016-07-21","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:c0cb78c9-e7db-4260-91ef-d82232750e33","http://resolver.tudelft.nl/uuid:c0cb78c9-e7db-4260-91ef-d82232750e33","Stochastic convection parameterization with Markov Chains in an intermediate-complexity GCM","Dorrestijn, J.; Crommelin, D.T.; Siebesma, A.P.; Jonker, H.J.J.; Selten, F.","","2016","Conditional Markov chain (CMC) models have proven to be promising building blocks for stochastic convection parameterizations. In this paper, it is demonstrated how two different CMC models can be used as mass flux closures in convection parameterizations. More specifically, the CMC models provide a stochastic estimate of the convective area fraction that is directly proportional to the cloud-base mass flux. Since, in one of the models, the number of CMCs decreases with increasing resolution, this approach makes convection parameterizations scale aware and introduces stochastic fluctuations that increase with resolution in a realistic way. Both CMC models are implemented in a GCM of intermediate complexity. It is shown that with the CMC models, trained with observational data, it is possible to improve both the subgrid-scale variability and the autocorrelation function of the cloud-base mass flux as well as the distribution of the daily accumulated precipitation in the tropics. Hovmöller diagrams and wavenumber–frequency diagrams of the equatorial precipitation indicate that, in this specific GCM, convectively coupled equatorial waves are more sensitive to the mean cloud-base mass flux than to stochastic fluctuations. 
A smaller mean mass flux tends to increase the power of the simulated MJO and to diminish equatorial Kelvin waves.","physical meteorology and climatology; convective-scale processes; cumulus clouds; models and modeling; general circulation models; parameterization; stochastic models; subgrid-scale processes","en","journal article","American Meteorological Society","","","","","","","2016-07-01","Civil Engineering and Geosciences","Geoscience and Remote Sensing","","","",""
"uuid:5a97e6ff-26e5-44fa-887d-7700acd37e98","http://resolver.tudelft.nl/uuid:5a97e6ff-26e5-44fa-887d-7700acd37e98","A ship egress analysis method using spectral Markov decision processes","Kana, A.A.","Kana, A.A. (advisor)","2016","","ship design; decision making; egress analysis; Markov decision process; eigenvalue analysis","","conference paper","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","",""
"uuid:ebc15b9d-5db4-4a51-a456-b217a7001416","http://resolver.tudelft.nl/uuid:ebc15b9d-5db4-4a51-a456-b217a7001416","A decision-making framework for planning lifecycle ballast water treatment compliance","Kana, A.A.","Kana, A.A. (advisor)","2016","","ship design; decision making; ballast water compliance; Markov decision process; eigenvalue analysis","","conference paper","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","",""
"uuid:7bfb2430-330a-48e3-ad43-a348b878852b","http://resolver.tudelft.nl/uuid:7bfb2430-330a-48e3-ad43-a348b878852b","A ship egress analysis method using spectral Markov decision processes","Kana, A.A.","Kana, A.A. (advisor)","2016","","ship design; decision making; egress analysis; Markov decision process; eigenvalue analysis","","conference paper","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","",""
"uuid:283f67cd-3f2e-49d4-9c0b-0c3e5bd88b0f","http://resolver.tudelft.nl/uuid:283f67cd-3f2e-49d4-9c0b-0c3e5bd88b0f","Observing group decision making processes","Delic, Amra (Technische Universität Wien); Neidhardt, Julia (Technische Universität Wien); Nguyen, Thuy-Ngoc (Free University of Bozen-Bolzano); Ricci, Francesco (Free University of Bozen-Bolzano); Rook, L. (TU Delft Economics of Technology and Innovation); Werthner, Hannes (Technische Universität Wien); Zanker, Markus (Free University of Bozen-Bolzano)","","2016","Most research on group recommender systems relies on the assumption that individuals have conflicting preferences; in order to generate group recommendations the system should identify a fair way of aggregating these preferences. Both empirical studies and theoretical frameworks have tried to identify the most effective preference aggregation techniques without coming to definite conclusions. In this paper, we propose to approach group recommendation from the group dynamics perspective and analyze the group decision making process for a particular task (in the travel domain). We observe several individual and group properties and correlate them to choice satisfaction. Supported by these initial results we therefore advocate for the development of new group recommendation techniques that consider group dynamics and support the full group decision making process.","Group recommender systems; User study; Preference aggregation; Group decision processes","en","conference paper","","","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:4d24378e-0f27-4159-ac42-9b672eda32a9","http://resolver.tudelft.nl/uuid:4d24378e-0f27-4159-ac42-9b672eda32a9","The governance of flood risk planning in Guangzhou, China: using the past to study the present","Meng, M. (TU Delft Spatial Planning and Strategy); Dabrowski, M.M. (TU Delft Spatial Planning and Strategy)","Hein, Carola (editor)","2016","Based on the framework of governance adapted from the work of Patsy Healey and drawing on the case of Guangzhou, which is regarded as the most vulnerable city in China to flooding and waterlogging, this paper adds to the literature on urban climate change adaptation. It does so by shedding light on the history of the city’s struggle against the water and examining why the current spatial planning and flood risk management fails to address the growing flood risk linked with climate change. The paper distinguishes two major transformations of the approach to dealing with water in Guangzhou. Historically, the city was built under the influence of Fengshui Philosophy and co-existed with water. Then, the approach shifted towards engineering-based solutions to containing flood risk under the stress of rapid city expansion. After that, in the context of a changing climate, to minimise flood risk the local government is transferring its priorities from the planning of hard engineering solutions (back) towards soft nature-based solutions. However, the deeply rooted top-down planning culture and clear-cut functional separation between different departments of the local government critically affect the implementation of the policy and cooperation between the different agencies to address the present and increasingly urgent cross-cutting climate change adaptation agenda.
EuroSDR is the recognised provider of research-based knowledge to a Europe where citizens can readily benefit from geographic information. Its mission is to develop and improve methods, systems and standards for the acquisition, processing, production, maintenance, management, visualization, and dissemination of geographic reference data in support of applications and service delivery.
EuroSDR delivers advanced research-based knowledge. Its value is generated by facilitating interaction between research organisations and the public and private sector with the aim of exchanging ideas and knowledge about relevant research topics; by facilitating and contributing to research projects; and by transferring knowledge and research results to real world applications. The paper gives an overview about EuroSDR research principles, research alliances, objectives and action plans of each of the technical commissions.","EuroSDR; network; research-based knowledge; timely research; data acquisition; modelling and processing; updating and integration; information usage; business models and operations; knowledge transfer","en","conference paper","ISPRS","","","","","","","","","","Urban Data Science","","",""
"uuid:629db81d-a2e2-4a83-a7af-301481f241bf","http://resolver.tudelft.nl/uuid:629db81d-a2e2-4a83-a7af-301481f241bf","A Process Perspective on Regulation: A Grounded Theory Study into Regulatory Practice in Newly Liberalized Network-Based Markets","Ubacht, J. (TU Delft Information and Communication Technology)","","2016","The transition from a former monopolistic towards a more competitive market in
newly liberalized network-based markets raises regulatory issues. National Regulatory Authorities (NRAs) face the challenge to deal with these issues in order to guide the transition process. Although this transition process is widely studied, an integral view on the regulatory process itself remained absent. This raises the research question how NRAs deal with the regulatory issues while aiming for competition engineering. By following a Grounded Theory approach we analyzed the regulatory practice of three NRAs in newly liberalized mobile telecommunications markets during a five-year period. Our study reveals a high variety in procedural activities that represent the complexity of the regulatory process. Insight into these activities is informative for regulatory processes that need to determine the appropriate governance arrangements in complex, dynamic markets in which the institutions and technology go through a continuous co-evolutionary process. Firmly based in empirical data, we present the theoretical concept of mixing and matching, which represents the way in which activities are mixed during the regulatory process and matched with the issue that requires a governance arrangement. Further research into regulatory practice in other newly liberalized network-based markets will lead to a formal theory of regulatory practice as a process. This study contributes to the domain of regulatory studies by focusing on the procedural aspects of regulatory practice in newly liberalized network-based markets.","competition; Grounded Theory; liberalization; mobile telecommunications; network-based market; process of decision making; regulatory practice","en","journal article","","","","","","Accepted Author Manuscript","","2017-06-01","","","Information and Communication Technology","","",""
"uuid:4f930f13-1d30-402b-9d61-9956cac25ac9","http://resolver.tudelft.nl/uuid:4f930f13-1d30-402b-9d61-9956cac25ac9","A review of fuel cell systems for maritime applications","van Biert, L. (TU Delft Ship Design, Production and Operations); Godjevac, M. (TU Delft Ship Design, Production and Operations); Visser, K. (TU Delft Ship Design, Production and Operations); Aravind, P.V. (TU Delft Energy Technology)","","2016","Progressing limits on pollutant emissions oblige ship owners to reduce the environmental impact of their operations. Fuel cells may provide a suitable solution, since they are fuel efficient while they emit few hazardous compounds. Various choices can be made with regard to the type of fuel cell system and logistic fuel, and it is unclear which have the best prospects for maritime application. An overview of fuel cell types and fuel processing equipment is presented, and maritime fuel cell application is reviewed with regard to efficiency, gravimetric and volumetric density, dynamic behaviour, environmental impact, safety and economics. It is shown that low temperature fuel cells using liquefied hydrogen provide a compact solution for ships with a refuelling interval up to tens of hours, but may result in total system sizes up to five times larger than high temperature fuel cells and more energy dense fuels for vessels with longer mission requirements. The expanding infrastructure of liquefied natural gas and development state of natural gas-fuelled fuel cell systems can facilitate the introduction of gaseous fuels and fuel cells on ships. Fuel cell combined cycles, hybridisation with auxiliary electricity storage systems and redundancy improvements are identified as topics for further study.","Emissions; Fuel cells; Fuel processing; Logistic fuels; Maritime application; Ships","en","journal article","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:5faa20f6-049f-4aa7-ade6-5c9996a142e1","http://resolver.tudelft.nl/uuid:5faa20f6-049f-4aa7-ade6-5c9996a142e1","Response of large-scale coastal basins to wind forcing: Influence of topography","Chen, Wen L. (University of Twente); Roos, Pieter C. (University of Twente); Schuttelaars, H.M. (TU Delft Mathematical Physics); Kumar, M. (TU Delft Mathematical Physics); Zitman, T.J. (TU Delft Coastal Engineering); Hulscher, SJMH (University of Twente)","","2016","Because wind is one of the main forcings in storm surge, we present an idealised process-based model to study the influence of topographic variations on the frequency response of large-scale coastal basins subject to time-periodic wind forcing. Coastal basins are represented by a semi-enclosed rectangular inner region forced by wind. It is connected to an outer region (represented as an infinitely long channel) without wind forcing, which allows waves to freely propagate outward. The model solves the three-dimensional linearised shallow water equations on the f plane, forced by a spatially uniform wind field that has an arbitrary angle with respect to the along-basin direction. Turbulence is represented using a spatially uniform vertical eddy viscosity, combined with a partial slip condition at the bed. The surface elevation amplitudes, and hence the vertical profiles of the velocity, are obtained using the finite element method (FEM), extended to account for the connection to the outer region. The results are then evaluated in terms of the elevation amplitude averaged over the basin’s landward end, as a function of the wind forcing frequency. In general, the results point out that adding topographic elements in the inner region (such as a topographic step, a linearly sloping bed or a parabolic cross-basin profile) causes the resonance peaks to shift in the frequency domain, through their effect on local wave speed. 
The Coriolis effect causes the resonance peaks associated with cross-basin modes (which without rotation only appear in the response to cross-basin wind) to emerge also in the response to along-basin wind and vice versa.","Wind-driven flow; Coastal basins; Resonance; Topography; Idealised process-based modelling; Coriolis effect; Frequency response","en","journal article","","","","","","","","","","","Mathematical Physics","","",""
"uuid:470d548a-39c7-4def-8a10-6845f8fbe5f4","http://resolver.tudelft.nl/uuid:470d548a-39c7-4def-8a10-6845f8fbe5f4","High sensitive gas sensors realized by a transfer-free process of CVD graphene","Ricciardella, F. (TU Delft Electronic Components, Technology and Materials); Vollebregt, S. (TU Delft Electronic Components, Technology and Materials); Polichetti, T (ENEA UTTP-MDB); Alfano, B. (ENEA UTTP-MDB; Università degli Studi di Napoli Federico II); Massera, E. (ENEA UTTP-MDB); Sarro, Pasqualina M (TU Delft Electronic Components, Technology and Materials)","Fontana, E. (editor); Ruiz-Zamarreno, C. (editor)","2016","The work presented herein investigates the behavior of graphene-based gas sensors realized using an innovative way to prepare graphene. The sensing layer was directly grown by chemical vapor deposition on a pre-patterned CMOS-compatible Mo catalyst and then released onto the underlying SiO2 through a completely transfer-free process. Devices with different geometries were designed and tested towards NO2 and NH3 in environmental conditions, i.e. room temperature and relative humidity set at 50%. Furthermore, these gas sensors were also calibrated, resulting in the ability to detect concentrations down to 240 ppb and 17 ppm of NO2 and NH3, respectively. These results are in agreement with the best performances reported in the literature for graphene-based sensors. They not only confirm the successful device fabrication through the transfer-free approach, but also pave the way for large-scale production of MEMS/NEMS sensors.","graphene-based gas sensors; environmental conditions; chemical vapor deposition; transfer-free process","en","conference paper","IEEE","","","","","Accepted author manuscript","","","","","Electronic Components, Technology and Materials","","",""
"uuid:8ee99252-e848-420c-967b-e99dbf4fd89a","http://resolver.tudelft.nl/uuid:8ee99252-e848-420c-967b-e99dbf4fd89a","Innovative system for the construction and management of student residences - Frameup system","De Andrade, Pedro Pimenta (Luleå University of Technology); Lagerqvist, Ove (Luleå University of Technology); Veljkovic, M. (TU Delft Steel & Composite Structures); Simoes, Rui (Universidade de Coimbra); Lundholm, John (Part Construction AB)","","2016","Sweden has a strong demand for the construction of student accommodation, and consequently significant efforts have been made to increase and streamline construction methods. In addition, the fluctuation in the number of students admitted each year at each university leads to periods of housing shortage or, conversely, to an eventual surplus on the housing market. For these reasons, a fast execution process in construction is needed to fulfil market needs, together with a housing control mechanism which balances the students' needs with the housing availability. The FRAMEUP system thus arises to solve both problems by combining modular construction with an innovative execution process. FRAMEUP buildings use a steel frame in combination with prefabricated 3D modules (fully equipped and suitable for student accommodation) which are assembled starting from the roof down to the 1st floor. A lifting system permits the erection of the building: each time the building is lifted, a clearance of one floor height is created at ground level for the assembly of a new floor. The procedure is repeated, according to the number of floors, until the 1st floor of the building, the last floor of the execution sequence, is assembled. Alongside its advantage of fast execution, the FRAMEUP system allows the number of floors to be efficiently increased or decreased, and consequently enables permutability with other buildings of the same nature. 
Thus, assuming a network of FRAMEUP buildings at each university, this permutability would create the necessary conditions for the number of floors at each campus to follow the fluctuations of the student population among the different universities over different periods of time, so as to meet the needs for housing or to avoid a surplus in construction.","Fast execution process; Innovative construction method; Modular construction","en","conference paper","International Association for Bridge and Structural Engineering (IABSE)","","","","","","","","","","Steel & Composite Structures","","",""
"uuid:e544e41a-e1c0-416f-8a78-1e7b39f4edcb","http://resolver.tudelft.nl/uuid:e544e41a-e1c0-416f-8a78-1e7b39f4edcb","Investigation of sulphur isotope variation due to different processes applied during uranium ore concentrate production","Krajkó, Judit (European Commission Joint Research Centre, Institute for Transuranium Elements Karlsruhe); Varga, Zsolt (European Commission Joint Research Centre, Institute for Transuranium Elements Karlsruhe); Wallenius, Maria (European Commission Joint Research Centre, Institute for Transuranium Elements Karlsruhe); Mayer, Klaus (European Commission Joint Research Centre, Institute for Transuranium Elements Karlsruhe); Konings, R. (TU Delft RST/Reactor Physics and Nuclear Materials; European Commission Joint Research Centre, Institute for Transuranium Elements Karlsruhe)","","2016","The applicability and limitations of sulphur isotope ratio as a nuclear forensic signature have been studied. The typically applied leaching methods in uranium mining processes were simulated for five uranium ore samples and the n(34S)/n(32S) ratios were measured. The sulphur isotope ratio variation during uranium ore concentrate (UOC) production was also followed using two real-life sample sets obtained from industrial UOC production facilities. Once the major source of sulphur is revealed, its appropriate application for origin assessment can be established. Our results confirm the previous assumption that process reagents have a significant effect on the n(34S)/n(32S) ratio, thus the sulphur isotope ratio is in most cases a process-related signature.","Nuclear forensics; Origin assessment; Process-related signature; Sulphur isotope; Uranium leaching; Uranium ore concentrate","en","journal article","","","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:5969ca4a-b123-4e05-beab-f3c02db912dc","http://resolver.tudelft.nl/uuid:5969ca4a-b123-4e05-beab-f3c02db912dc","Exploring Homeowners’ Insulation Activity","Friege, J (Wuppertal Institute for Climate); Holtz, G (Wuppertal Institute for Climate); Chappin, E.J.L. (TU Delft Energie and Industrie)","","2016","Insulating existing buildings offers great potential for reducing greenhouse gas emissions and meeting Germany’s climate protection targets. Previous research suggests that, since homeowners’ decision-making processes are inadequately understood as yet, today’s incentives aiming at increasing insulation activity lead to unsatisfactory results. We developed an agent-based model to foster the understanding of homeowners’ decision-making processes regarding insulation and to explore how situational factors, such as the structural condition of houses and social interaction, influence their insulation activity. Simulation experiments allow us furthermore to study the influence of socio-spatial structures such as residential segregation and population density on the diffusion of renovation behavior among homeowners. Based on the insights gained, we derive recommendations for designing innovative policy instruments. We conclude that the success of particular policy instruments aiming at increasing homeowners’ insulation activity in a specific region depends on the socio-spatial structure at hand, and that reducing financial constraints only has a relatively low potential for increasing Germany’s insulation rate. Policy instruments should also target the fact that specific renovation occasions are used to undertake additional insulation activities, e.g. by incentivizing lenders and craftsmen to advise homeowners to have insulation installed.","Spatial Agent-Based Model; Decision-Making Process; Homeowners; Thermal Insulation; Situational Factors; Social Interaction","en","journal article","","","","","","","","","","","Energie and Industrie","","",""
"uuid:dd499c04-5ee8-4884-9b54-69f7b4f6e756","http://resolver.tudelft.nl/uuid:dd499c04-5ee8-4884-9b54-69f7b4f6e756","RRAM Variability and its Mitigation Schemes","Pouyan, P. (TU Delft Computer Engineering; Universitat Politecnica de Catalunya); Amat, Esteve (Universitat Politecnica de Catalunya); Hamdioui, S. (TU Delft Computer Engineering); Rubio, Antonio (Universitat Politecnica de Catalunya)","","2016","Emerging technologies such as RRAMs are attracting significant attention as candidates to replace current conventional memories, due to tempting characteristics such as high scalability, CMOS compatibility and non-volatility. However, critical causes of hardware reliability failures (such as process variation due to their nano-scale structure) have gained considerable importance for achieving acceptable memory yields. Such vulnerabilities make it essential to investigate new robust design strategies at the circuit and system level. In this paper we first review the RRAM variability phenomenon and the variation-tolerant techniques at the circuit level. We then analyze the impact of variability on memory reliability and propose a variation-monitoring circuit that discerns the reliable memory cells affected by process variability.","RRAM; Reliability; Process Variability; Mitigation; Emerging Memory; Resistive Memory","en","conference paper","IEEE","","","","","","","","","","Computer Engineering","","",""
"uuid:97d034ec-05c2-41c8-a00a-9bef925e8980","http://resolver.tudelft.nl/uuid:97d034ec-05c2-41c8-a00a-9bef925e8980","Aircraft Disposal and Recycle Cost Estimation","Zhao, X. (TU Delft Air Transport & Operations); Verhagen, W.J.C. (TU Delft Air Transport & Operations); Curran, R. (TU Delft Air Transport & Operations)","Borsato, M. (editor); Wognum, N. (editor); Peruzzini, M. (editor); Stjepandić, J. (editor); Verhagen, W.J.C. (editor)","2016","The present study develops a method for evaluating Disposal and Recycle (D&R) cost in view of the increasing demand for aircraft retirement. Firstly, a process model is extracted. The subordinate cost elements are also identified. Next, the cost aggregations based on the D&R process steps are discussed. Moreover, an economic indicator is proposed to support the determination of aircraft D&R strategies. The indicator is used to evaluate the economic performance and to facilitate trade-off studies among different D&R scenarios. This analysis is demonstrated on two aircraft types with two scenarios. In addition, a sensitivity analysis evaluating the impact of the salvage value, residual value, D&R cost, and the learning factor is performed. It is found that engine D&R yields greater economic gains than that of the aircraft. The salvage value and residual value are the main factors which influence the D&R economic performance.","Cost analysis; aircraft disposal and recycle process; disposal and recycle economic indicator","en","book chapter","IOS Press","","","","","","","","","","Air Transport & Operations","","",""
"uuid:a9a91806-a8f2-4c8a-833f-be8bcefbccbb","http://resolver.tudelft.nl/uuid:a9a91806-a8f2-4c8a-833f-be8bcefbccbb","Solving Transition-Independent Multi-agent MDPs with Sparse Interactions","Scharpff, J.C.D. (TU Delft Algorithmics); Roijers, Diederik M. (Universiteit van Amsterdam); Oliehoek, F.A. (Universiteit van Amsterdam; University of Liverpool); Spaan, M.T.J. (TU Delft Algorithmics); de Weerdt, M.M. (TU Delft Algorithmics)","","2016","In cooperative multi-agent sequential decision making under uncertainty, agents must coordinate to find an optimal joint policy that maximises joint value. Typical algorithms exploit additive structure in the value function, but in the fully-observable multi-agent MDP (MMDP) setting such structure is not present. We propose a new optimal solver for transition-independent MMDPs, in which agents can only affect their own state but their reward depends on joint transitions. We represent these dependencies compactly in conditional return graphs (CRGs). Using CRGs the value of a joint policy and the bounds on partially specified joint policies can be efficiently computed. We propose CoRe, a novel branch-and-bound policy search algorithm building on CRGs. CoRe typically requires less runtime than the available alternatives and finds solutions to previously unsolvable problems.","Markov Decision Process; Transition-independent Multi-agent MDPs; Reward interactions; Conditional Return Graphs","en","conference paper","American Association for Artificial Intelligence (AAAI)","","","","","Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.","","","","","Algorithmics","","",""
"uuid:20839fb6-0459-42cf-bfdd-05c71e48dc69","http://resolver.tudelft.nl/uuid:20839fb6-0459-42cf-bfdd-05c71e48dc69","A non-parametric Bayesian approach to decompounding from high frequency data","Gugushvili, Shota (Universiteit Leiden); van der Meulen, F.H. (TU Delft Statistics); Spreij, Peter (Universiteit van Amsterdam; Radboud Universiteit Nijmegen)","","2016","Given a sample from a discretely observed compound Poisson process, we consider non-parametric estimation of the density f0 of its jump sizes, as well as of its intensity λ0. We take a Bayesian approach to the problem and specify the prior on f0 as the Dirichlet location mixture of normal densities. An independent prior for λ0 is assumed to be compactly supported and to possess a positive density with respect to the Lebesgue measure. We show that under suitable assumptions the posterior contracts around the pair (λ0,f0) at essentially the √(nΔ)-rate (up to a logarithmic factor), where n is the number of observations and Δ is the mesh size at which the process is sampled. The emphasis is on high frequency data, Δ→0, but the obtained results are also valid for fixed Δ. In either case we assume that nΔ→∞. Our main result implies existence of Bayesian point estimates converging (in the frequentist sense, in probability) to (λ0,f0) at the same rate. We also discuss a practical implementation of our approach. The computational problem is dealt with by inclusion of auxiliary variables and we develop a Markov chain Monte Carlo algorithm that samples from the joint distribution of the unknown parameters in the mixture density and the introduced auxiliary variables. Numerical examples illustrate the feasibility of this approach.
to the ship-centric Markov decision process (SC-MDP) framework. This method focuses on identifying the relationships of various decision making scenarios, and how those relationships change through time. The objective is to understand both these relationships and the impact of initial technology selection on lifecycle ballast water compliance. Two metrics are used. First, the optimal lifecycle strategy is presented for technology selection. Second,
the set of dominant eigenvalues is used as a metric to identify the number of unique, initial-condition-dependent design absorbing paths the process may converge to. Sensitivity studies are performed examining the effect of policy strength on preferred compliance strategy.","Ship design; decision making; Ballast water compliance; Markov decision process; eigenvalue analysis","en","conference paper","PRADS Organising Committee","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:a64363f0-863d-4589-b418-0a9d043640c7","http://resolver.tudelft.nl/uuid:a64363f0-863d-4589-b418-0a9d043640c7","A ship egress analysis method using spectral Markov decision processes","Kana, A.A. (TU Delft Ship Design, Production and Operations); Singer, D.J. (University of Michigan)","Nielsen, U.D. (editor); Jensen et al, J.J. (editor)","2016","This paper introduces a means of performing a ship egress analysis by applying eigenvalue analysis to the ship-centric Markov decision process (SC-MDP) framework. This method focuses on how people egress, the decisions they make under uncertainty, and the interaction between the individuals and the layout of the vessel. The objective is to understand the implications of uncertain decision making of people on general arrangement design. One metric is introduced, defined as the ratio between the largest eigenvalue and the second largest. This
decision metric is used to identify and quantify changes in decisions, as well as to help identify system attributes driving those changes in decisions. A case study is presented showing the utility of this method on a ship egress problem. Sensitivity studies are performed examining the effect of uncertainty and rewards on individuals’ decision making behavior.","Ship design; decision making; egress analysis; Markov decision process; eigenvalue analysis","en","conference paper","PRADS Organising Committee","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:03d42e93-5bd8-44b5-80a1-a13c52b04ee2","http://resolver.tudelft.nl/uuid:03d42e93-5bd8-44b5-80a1-a13c52b04ee2","Designing with an underdeveloped computational composite for materials experience","Barati, B.; Karana, E.; Hekkert, P.P.M.; Jönsthövel, I.","","2015","In response to the need for multidisciplinary development of computational composites, designers and material scientists are increasingly involved in collaborative projects to valorize these technology-push materials in the early stages of their development. To further develop the computational composites, material scientists need designers' inputs regarding the physical properties and temporal behavior of the composite as it embodies an application in a context of use. Effective communication of material knowledge and design knowledge between the two disciplines (material science and design) has proven to be challenging due to their different perspectives on materials. Designing appropriate product concepts requires understanding of the composite's unique characteristics and creating aspired value closely linked to those characteristics. Our design case shows that designing for materials experience can provide a useful framework to organize the design activities around understanding the technical and experiential characteristics of underdeveloped computational composites. Collecting and making tangible samples, outlining and simulating possible physical and temporal behavior, and discussing them with material scientists and users improved the designer's understanding of the underdeveloped computational composite. Our study points out the need for clarification of possible aspired values in designing with computational composites, and for discussion of those values, prior to determining the design/development path. 
Further, it underscores the multifaceted role of prototypes in resolving uncertainty associated with material knowledge and a preferred design path, and in mobilizing design actions, which entails further investigation.","design process; computational composites; materials experience; design-driven innovation; smart materials","en","conference paper","Design School Kolding","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:3556231a-5aec-469c-bbe8-ed68795b4dc8","http://resolver.tudelft.nl/uuid:3556231a-5aec-469c-bbe8-ed68795b4dc8","Representing nature: Late twentieth century green infrastructures in Paris","Van der Velde, J.R.T.; De Wit, S.I.","","2015","The appreciation of green infrastructures as ‘nature’ by urban communities presents a critical challenge for the green infrastructure concept. While many green infrastructures focus on functional considerations, their refinement as places where concepts of nature are represented and where nature can be experienced and understood, has received little attention in research and praxis. Contemporary urban societies entertain varied and distinctive ideas on nature and their relationship to it, themes explored in contemporary urban park and garden design. These projects can provide insights into the representation, comprehension and experience of nature in green infrastructures. This article expands on contemporary conceptions of nature in urban parks and urban gardens such as those realised in Paris between 1980 and 2000. The projects all display articulated expressions of conceptions of nature, reflecting both a return to the classical garden tradition, as well as elaborations of nature via the sensorial, ‘abundant nature’ and nature as process. These conceptions can be positioned within the theoretical framework of three forms of nature – first nature (wilderness), second nature (cultural landscape) and third nature (garden). In Paris, contemporary parks and gardens not only express new forms of nature, they also form part of a green infrastructure network in their own right. As a series of precise moments connected by rivers and canals, this network differs markedly from prevailing green infrastructure models. The network of parks and gardens in Paris represents a green infrastructural network made up of a layering of historical and contemporary elements connected in compound ways. 
The completeness of representations and elaborations of nature – gathered in the three natures – can be dissected and spread out over different constructed landscapes in the city, and it is up to the green infrastructure to unite them.","green infrastructure; conceptions of nature; three natures; urban gardens; urban parks; sensorial; context; natural processes","en","book chapter","Delft University of Technology","","","","","","","","Architecture and The Built Environment","Urbanism","","","",""
"uuid:d86b7aa3-5192-4ead-96bc-0901cad90208","http://resolver.tudelft.nl/uuid:d86b7aa3-5192-4ead-96bc-0901cad90208","Parallel creation of vario-scale data structures for large datasets","Meijers, B.M.; Suba, R.; van Oosterom, P.J.M.","","2015","Processing massive datasets that do not fit in the main memory of a computer is challenging. This is especially true in the case of map generalization, where the relationships between (nearby) features in the map must be considered. In our case, an automated map generalization process runs offline to produce a dataset suitable for visualizing at arbitrary map scale (vario-scale) and efficiently enabling smooth zoom user interactions over the web. Our solution for generalizing such large vector datasets is based on the idea of subdividing the workload according to the Fieldtree organization: a multi-level structure of space. It subdivides space regularly into fields (grid cells), at every level with a shifted origin. Only features completely fitting within a field are processed. Due to the Fieldtree organization, features on the boundary at a given level will be contained completely in one of the fields of the higher levels. Every field that resides at the same level in the Fieldtree can be processed in parallel, which is advantageous for processing on multicore computer systems. We have tested our method with datasets with up to 880 thousand objects on a machine with 16 cores, resulting in a decrease of runtime by a factor of 27 compared to a single sequential process run. This more than linear speed-up also indicates an interesting algorithmic side-effect of our approach.","Large datasets; parallel processing; generalization; vario-scale data structures","en","conference paper","ISPRS","","","","","","","","Architecture and The Built Environment","OTB","","","",""
"uuid:6faf146e-fc15-4d00-a287-bba31442d9ca","http://resolver.tudelft.nl/uuid:6faf146e-fc15-4d00-a287-bba31442d9ca","Experimental study of key parameters investigation in turnout crossing degradation process","Liu, X.; Markine, V.L.; Shevtsov, I.; Dollevoet, R.P.B.J.","","2015","The continuously increasing demand for public transportation capacity requires the railway network to operate on a tight schedule. The high transport volumes not only aggravate the degradation of railway infrastructure but also shorten the time available for maintenance. Well-arranged infrastructure maintenance contributes to budget reduction and reliability improvement. To investigate the key parameters in the turnout crossing degradation process, a series of subsequent measurements using an instrumented crossing system (ESAH-M) was performed on a 1:15 railway turnout at various stages. The results indicate that the wheel/rail impact area narrowed as rail wear deepened. This narrowing is a signal of rail damage. Frequency band-pass-filtered results describe the condition development of different structures in the turnout crossing section over the test period. A series of more systematic crossing measurements is in progress in a test section in the Netherlands. The ultimate purpose of this study is to form the crossing degradation function to be implemented in the structural health monitoring system (SHMS) for railway turnouts developed at TU Delft.","railway turnout crossing; degradation process; field measurements; dynamic frequency response function; condition assessment","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","",""
"uuid:70615b45-4dc0-492f-899a-769a56b46ac9","http://resolver.tudelft.nl/uuid:70615b45-4dc0-492f-899a-769a56b46ac9","Data-driven architectural design to production and operation","Bier, H.H.; Mostafavi, S.","","2015","Data-driven architectural production and operation explored within Hyperbody rely heavily on system thinking implying that all parts of a system are to be understood in relation to each other. These relations are established bi-directionally so that data-driven architecture is not only produced (designed and fabricated) by digital means but also incorporates digital, sensing-actuating mechanisms that enable real-time interaction between (natural or artificial) environments and users. Data-driven architectural production and operation exploit, in this context, the generative potential of process-oriented approaches wherein interactions between (human and non-human) agents and their (virtual and physical) environments have emergent properties that enable proliferation of hybrid architectural systems and ecologies.","data-driven design; generative systems; design information modeling; emergent design processes","en","journal article","","","","","","","","","Architecture and The Built Environment","Architectural Engineering and Technology","","","",""
"uuid:a7bc1d77-5620-4b38-932c-c0a43e811fde","http://resolver.tudelft.nl/uuid:a7bc1d77-5620-4b38-932c-c0a43e811fde","Model-based prediction of fluid bed state in full-scale drinking water pellet softening reactors","Kramer, O.; Jobse, M.A.; Baars, E.T.; van der Helm, A.W.C.; Colin, M.G.; Kors, L.J.; van Vugt, W.H.","","2015","Softening at drinking water treatment plants is often realised by fluidised bed pellet reactors. Generally, sand is used as seeding material and pellets are produced as a by-product. To improve sustainability, research has been carried out to replace the seeding material by re-using grained and sieved calcite pellets as seeding material. An explicit fluidisation model is developed to predict the fluid bed state in fluid bed pellet softening reactors with calcite as seeding material. The fluidisation theory is extended in a model whereby soft sensors are derived and experimentally tested for a wide range of seeding material and pellets. With the soft sensors, porosity, particle size and pressure drop can be explicitly calculated. Pilot research has been carried out to calibrate, and full-scale experiments to validate, the fluidisation models. Four different fluidisation models were reviewed, from which the original Richardson-Zaki fluid bed model has been selected as the best explicit fluidisation model to predict the porosity, particle size and pressure drop. Applying a discretisation model for the fluid bed pellet reactor, the current operation of the softening treatment can be improved by estimating the fluidisation, pressure drop behaviour and particle profile. 
Waternet can apply the Richardson-Zaki fluid bed model in practice for building a soft sensor to achieve optimal fluid bed conditions for the softening process.","calcite; carman-kozeny; drinking water; ergun; fluidisation; garnet pellets; modelling; pellet softening; process optimisation; richardson-zaki; soft sensor; terminal settling","en","conference paper","IWA","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:4085155b-679c-4430-8304-0bcdd88dad94","http://resolver.tudelft.nl/uuid:4085155b-679c-4430-8304-0bcdd88dad94","Optimising mechanical behaviour of new advanced steels based on fine non-equilibrium microstructures","HajyAkbary, F.","Sietsma, J. (promotor)","2015","This Ph.D. thesis investigates the relation between microstructural and mechanical properties of Advanced High Strength Steels (AHSS), with the goal of developing a microstructure with optimised mechanical properties. Among different grades of AHSS, Quenching and Partitioning (Q&P) steel which is composed of thin films of retained austenite between carbon depleted martensite laths, is selected. Different Q&P microstructures were developed in a 0.3C-1.6Si-3.5Mn (wt.%) steel with non-homogenous chemical composition. Aiming an adequate control of the microstructural evolution during the Q&P process, the heat treatments were performed on small specimens in the dilatometer. In view of this, the thesis is divided into three parts: chapter 2 investigates the influence of specimen size on the tensile behaviour of steels, chapters 3 and 4 outline the methods to characterize the microstructural properties of the Q&P specimens and chapters 5 and 6 discuss the relation between the mechanical and microstructural properties. Chapter 2 studies the influence of the specimen geometry on the mechanical behaviour of steels. Miniature and standard specimens from different grades of steels were tested in tension. The results show that while the specimen geometry has insignificant influence on the actual elastic strain of the materials, the elastic strain which is measured from the crosshead displacements is higher than the actual strain. The reason is that the elongation of the fillet zones and the machine compliance are recorded along with the specimen elongation as the crosshead displacement. A mathematical model is developed to correct the influence of the elastic strain of the fillet-zones and the machine compliance. 
For different types of steels, the calculated elastic strain and the strain measured on the standard specimens are in good agreement, and consequently the proposed model can be used for calculating the elastic strain of the miniature specimens from the crosshead displacement. Moreover, it was found that the yield strength, ultimate tensile strength and uniform elongation of steels are almost independent of the specimen gauge length. Total elongation increases with decreasing specimen gauge length. This is a result of the calculation method, since the total elongation is calculated by dividing the elongation of the specimen by the initial gauge length, which is smaller for miniature specimens. Since the post-uniform elongation is independent of the specimen parallel zone, the measured total elongation is higher in miniature specimens. A method was applied for converting the total elongation of the miniature specimens to that obtained from standard ones. In chapter 3 an improved method is developed to measure the dislocation density of a lath martensitic steel by applying X-ray diffraction profile analysis. This was done by combining the modified Williamson-Hall (MWH) and modified Warren-Averbach (MWA) methods. The proposed method is independent of limitations due to the considered range of the Fourier length. This method leads to a dislocation density that is in good agreement with the dislocation density determined based on the dislocation strengthening. The MWH method, under the assumption of a fixed value for the dislocation distribution parameter, was applied to calculate the dislocation density. The calculated dislocation densities are in the range of the values determined from the dislocation strengthening. However, it was found that the combined MWH and MWA method can be used as a quantitative method for dislocation density calculations, with a better accuracy than just the MWH method. 
Chapter 4 investigates microstructural development during application of the Q&P process in a steel with inhomogeneous chemical composition. In-place EPMA and SEM analyses show that during the initial quenching, in Mn/C/Si-poor regions higher fractions of initial martensite are formed than in Mn/C/Si-rich regions. This leads to a non-homogeneous distribution of initial martensite in the matrix. Lowering the quenching temperature, a higher fraction of austenite transforms to initial martensite and therefore microstructural banding decreases. Moreover, it was found that precipitation of ε-carbides during the first quenching reduces the concentration of carbon in solid solution in martensite. Since the partitioning of carbon present in carbides requires their decomposition, and in view of the slow kinetics of carbide decomposition, full completion of the carbon partitioning process can be achieved only after isothermal holding times longer than predicted by simulations of carbon partitioning. A method was developed to determine the carbon concentration of secondary martensite, i.e. martensite that is formed during the final quenching, on the basis of dilatometry data. Additionally, it was found that at the initial stage of isothermal holding, carbon partitioning stabilizes a certain fraction of austenite. This stable austenite does not decompose to bainite during the isothermal holding and is retained at room temperature. In the specimens with higher quenching temperature, carbon partitioning stabilizes a larger fraction of austenite and therefore a lower fraction of bainite is formed. Furthermore, bainite formation reduces the volume fraction of secondary martensite, formed from unstable austenite, by two mechanisms. First, bainite formation is accompanied by carbon diffusion from bainite to austenite. This results in stabilization of a part of the unstable austenite. 
Secondly, bainite forms from unstable austenite and consequently decreases the fraction of unstable austenite. Chapter 5 studies the relation between the yield strength and microstructural properties of the constituent phases, i.e. retained austenite, initial martensite, bainite and secondary martensite. The in-situ X-ray diffraction analysis showed that there is an insignificant austenite-to-martensite transformation prior to as well as during yielding of the steels. Therefore, the induced martensite formation does not have a significant influence on the yield strength. The yield strengths of initial martensite, bainite and secondary martensite, which were estimated by applying physical models, are higher than the total yield strength of the specimens. The summation of the normalised yield strengths of the constituent phases gives an acceptable approximation of the total yield strength. In this respect, the reduction of the yield strength of the Q&P specimens with increasing quenching temperature could be related to the decrease of the dislocation density of initial martensite. Chapter 6 showed that a good combination of high strength and elongation is obtained by decreasing the quenching temperature, which provides a high fraction of initial martensite with high dislocation density. Moreover, microstructures with a high fraction of initial martensite have a higher fraction of retained austenite as well as a low fraction of secondary martensite, a brittle phase, and therefore show high elongation. Mechanical properties of the developed microstructures can compete with other types of AHSS.","quenching and partitioning process; advanced high strength steel; microtensile test; microstructural analysis","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Materials Science & Engineering","","","",""
"uuid:7f8ce7c6-434a-42a2-9285-e22ce6724121","http://resolver.tudelft.nl/uuid:7f8ce7c6-434a-42a2-9285-e22ce6724121","Analog Integrated Circuit and System Design for a Compact, Low-Power Cochlear Implant","Ngamkham, W.","Serdijn, W.A. (promotor); Frijns, J.H.M. (promotor)","2015","Cochlear Implants (CIs) are prosthetic devices that restore hearing in profoundly deaf patients by bypassing the damaged parts of the inner ear and directly stimulating the remaining auditory nerve fibers in the cochlea with electrical pulses. This thesis describs the electronic circuit design of various modules for application in CIs in order to save area, reduce power consumption and ultimately move towards a fully implantable CI. To enhance the perception of tonal languages (such as Thai and Chinese) and music, an effort to realize the speech processor in a CI that imitates the inner hair cells and the auditory nerve behaviour more precisely should be made. According to recent physiological experiments, the envelope and phase of speech signals are required to enhance the perceptive capability of a CI implanted patient. The design of an analog complex gammatone filter is introduced in order to extract both envelope and phase information of the incoming speech signals as well as to emulate the basilar membrane behavior. A subthreshold Gm ? C circuit topology is selected in order to verify the feasibility of the complex gammatone filter at very low power operation. Several speech encoding strategies like continuous time interleaved sampling (CIS), race-to-spike asynchronous interleaved sampling (AIS), phase-locking zero-crossing detection (PL-ZCD) and phase-locking peak-picking (PL-PP) are studied and compared in order to find a compact analog speech processor that allows for full implantation and is able to convey both time and frequency components of the incoming speech to a set of electrical pulse stimuli. 
A comparison of the input and reconstructed speech signals in terms of correlation factor and hardware complexity pointed out that a PL-PP strategy provides a compact solution for the CI electronic hardware design since this strategy does not require a high precision envelope detector. A subthreshold CMOS peak-instant detector to be used in a PL-PP CI processor has been designed. Circuit simulations, using AMIS 0.35 µm technology, show that the proposed detector can be operated from a 1.2 V supply and consumes less than 1 µW static power for detecting a 5 kHz input signal. The output signal of the detector together with the input signal amplitude (the output of the band-pass of each channel) is expected to be used as control parameters in a stimulator for apical cochlear electrodes. To design stimulators that are implanted inside the body, there are very strict requirements on the size and power consumption. Therefore, it is important to convey as much charge as possible into the tissue while using an as low as possible supply voltage to minimize power consumption. A novel method for maximizing the charge transfer for constant current neural stimulators has been presented. This concept requires a few additional current branches to form two feedback loops to increase the output resistance of a MOS current mirror circuit that requires only one effective drain-source voltage drop. The main benefit we achieve for neural stimulation is the larger amount of charge that can be conveyed to the stimulation electrode. In other words, for the same amount of charge required, the supply voltage can be reduced. Also, a compact programmable biphasic stimulator for cochlear implants has been designed by using the above concept and implemented in AMS 0.18 µm high-voltage CMOS IC technology, using an active chip area of only 0.042 mm^2. 
Measurement results show that a proper charge balance of the anodic and cathodic stimulation phases is achieved and a dc blocking capacitor can be omitted. The resulting reduction in the required area enables many stimulation channels on a single die. As the work laid out in this thesis produced only stand-alone modules, future work should focus on combining all these modules together to form an analog CI processor suitable for a fully implantable cochlear implant.","Cochlear Implants; Analog Band-pass filter; Gammatone filter; Analog peak detector; Neural stimulators; Current generator; Current sources; Biphasic stimulator; Speech processing strategies","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","","",""
"uuid:1e169876-9b78-4ea0-be4a-4794be886bf4","http://resolver.tudelft.nl/uuid:1e169876-9b78-4ea0-be4a-4794be886bf4","Blind beamforming techniques for global tracking systems","Zhou, M.","Van der Veen, A.J. (promotor)","2015","After the development in the past 120 years since the invention of the first radio transmission, worldwide wireless communication systems are nowadays part of daily life. Behind the shining and astonishing achievement of modern communication systems, the exhaustion of existing frequency spectrum resources has been a concern. In higher frequency bands, the most advanced techniques are in development for the fifth cellular mobile communication system (5G) to meet rapid growth in its applications. The 5G system sits in the millimeter-wave band and consumes a wider bandwidth to offer a higher data transmission speed and a larger system capacity. Frequency/time/code division multiple accesses (FDMA/TDMA/CDMA) are successful techniques to reuse and to save the frequency spectrum. However, in lower frequency bands, existing communication systems face similar unprecedented demands to accommodate more users in new applications. These growing demands exceed the designed system capacity and thus call for innovative solutions while keeping compatibility to the current setup to reduce the cost of users. For example, in the automatic identification system (AIS), satellite receivers are being used for expanding the service coverage of ship tracking to the global range, and similarly in the automatic dependent surveillance-broadcast (ADS-B) system for aircraft tracking. These systems are narrowband and originally designed in the last century, but they will continue to run for at least another couples of years without major updating of the user-side equipment. The new application of AIS considered in this thesis is Satellite AIS. The satellite runs in the low-earth orbit (LEO). 
On the satellite, receiving AIS signals becomes much more difficult than before: one has to combat in-cell and inter-cell interfering sources from the system itself. Interference suppression is the main topic of this thesis. Narrowband spatial beamforming techniques for antenna arrays are candidate solutions to this challenge. This thesis tries to develop new beamforming techniques with a simple structure and a low computational complexity. With these techniques, this thesis establishes a framework of multiuser reception for Satellite AIS. The new beamforming techniques are proposed through three consecutive chapters associated with their foundation, evolution, and application. In Chapter~1, the background and the issues brought by Satellite AIS are introduced. Related literature is reviewed. The contributions of this thesis are outlined. In Chapter~2, the beamforming problem for signals in additive white noise is discussed. As a basic tool for the proposed algorithms in this thesis, a signed URV algorithm (SURV) is proposed for the basic problem of principal subspace computation and tracking as a replacement of the singular value decomposition (SVD). The updating and downdating of SURV is direct and simple. Unlike previous linear-algebraic algorithms, SURV has no issues of numerical stability and shows consistent performance in both stationary and nonstationary cases. This chapter shows how SURV is derived and provides its theoretical support. In Chapter~3, the beamforming techniques for interference suppression in nonstationary scenarios are discussed. New blind beamforming techniques are proposed for separating overlapping packets in such scenarios. The connections between subspace intersection, oblique projection, the generalized SVD (GSVD), the generalized eigenvalue decomposition (GEVD), and SURV are exposed. SURV is used as one of the basic tools for the beamforming techniques. Simulation and experimental results of the proposed algorithms are shown. 
In Chapter~4, based on the proposed algorithms in Chapter~3, a special blind beamforming technique enabling tracking for the multi-user receiver for Satellite AIS is proposed. The proposed algorithm is based on SURV. Results of the receiver in a software simulation model and on a hardware platform are provided. In the remaining part of the thesis, the work on developing the software simulation model and constructing the hardware platform is presented. The outputs of this work are used for the verification and validation of the proposed algorithms in this thesis. In Chapter~5, the method of developing the software testbed (simulation model) is presented. This testbed is built by using several tools including SystemC-AMS and MATLAB. The software implementation of the receiver is done in MATLAB and then translated into C++. This chapter first shows a lightweight version of the testbed in the hope that readers can learn and construct their own simulation model from scratch. The chapter also shows how the lightweight version can be extended and reconfigured into a more sophisticated model incorporating the practical global ship distribution and the orbits of launched satellites, as is done in Chapter~4. In Chapter~6, the structure of the hardware platform is presented. This chapter gives an example of how to build array receivers from available equipment. This platform uses an array of modified commercial RF frontends to downconvert the AIS signals to baseband. Sampled data are fed into a PC and processed in MATLAB. The decoded AIS messages are analyzed and visualized on maps.","Beamforming; Tracking; Array signal processing; Signed URV algorithm; Matrix decomposition; SVD; GSVD; Satellite; Automatic identification system; Mixed signal system modeling; SystemC; SystemC-AMS","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Circuits and Systems","","","",""
"uuid:d0c61fd2-804b-4827-ae8a-e0e93d282a56","http://resolver.tudelft.nl/uuid:d0c61fd2-804b-4827-ae8a-e0e93d282a56","Next-generation satellite gravimetry for measuring mass transport in the Earth system","Teixeira Encarnação, J.","Klees, R. (promotor); Ditmar, P.G. (promotor)","2015","The main objective of the thesis is to identify the optimal set-up for future satellite gravimetry missions aimed at monitoring mass transport in the Earth’s system.The recent variability of climatic patterns, the spread of arid regions and associ- ated changes in the hydrological cycle, and vigorous modifications in the ice coverage at polar regions have been attributed to anthropogenic influence. As such, it is important to continue monitoring the Earth system in order to properly constrain and improve the geophysical and climatic models and to better interpret the causes and consequences of climate change. Satellite gravimetric data are also exploited to further the knowledge on other geophysical processes with high societal and scientific impact, such as megathrust earthquakes, drought monitoring and Glacial Isostatic Adjustment (GIA). The primary focus of the study is to properly quantify the errors in the gravimetric data to be collected by future gravimetric satellites, in particular those related to the measurement of the temporal gravitational field variations. One source of errors comes from the background force models describing rapid mass transport processes; another error source is related to the background static gravity field model. These models are used to complement geophysical signals that are missing or improperly represented in the gathered satellite gravity data. However, they are built on the basis of in situ data that lack global coverage and, therefore, suffer from a limited accuracy (particularly in remote areas). 
Although the fidelity of these models is constantly improving, the satellite data accuracy is also increasing with the on-going technological and methodological advances. Determining the net effect of these conflicting trends is the main driver to study the propagation of errors in background models into the estimated models. Other sources of errors arise from imperfections of the on-board sensors, such as the ranging sensor or the Global Navigation Satellite System (GNSS) receiver. The influence of the sensor errors is divided into the major independent contributions, with the corresponding frequency description, and assembled into a detailed noise model. The model predicts the effects of i) the inaccurately known orbital positions, ii) the noise in the inter-satellite metrology system, iii) the noise in the on-board accelerometers, iv) the wrongly-estimated Line of Sight (LoS) frame accelerations resulting from errors in the radial orbital velocities, and v) errors in the orientation of the LoS vector. The model has been validated with the help of actual Gravity Recovery And Climate Experiment (GRACE) a posteriori residuals, which are compared to the output of the noise model considering a simulated GRACE mission. Therefore, once the assumptions describing sensor and model accuracies are modified to reflect those predicted for future gravimetric missions, it is reasonable to expect that this noise model reproduces realistic errors for those missions. Also relevant is the analysis of the sensitivity of the data in terms of isotropy. As learned from the GRACE mission, the nearly-constant North-South alignment of the measurement direction makes the data less sensitive to gravitational changes along the East-West direction. Although formally not an error itself, the anisotropic data sensitivity amplifies the errors in the data. 
The sensor and model errors are propagated first to the gravimetric data and further to the gravitational field, in full-scale simulations of the cartwheel, trailing and pendulum satellite formations. The results are analysed in terms of i) the observation error in the frequency domain and ii) the estimated gravity field model error in the frequency and spatial domains. The error budgets for these formations are also quantified. The results indicate that the pendulum formation with no along-track displacement is least sensitive to model and sensor errors, in particular to temporal aliasing. The conducted study reveals serious limitations in the cartwheel mission concept, since the orbit errors are considerably amplified by the diagonal components of the gravity gradient tensor, while the pendulum and trailing formations are only affected by (small) off-diagonal components. The spatial error patterns provide valuable clues on how to best combine the different formation geometries in order to produce minimum anisotropy in the sensitivity of collected data. The data from the pendulum formation show some anisotropic sensitivity but the combination of such data with those from a trailing formation, such as the GRACE Follow On (GFO), would eliminate this disadvantage (as well as the low accuracy near the poles of the pendulum formation). Unlike alternative proposals for dual-pair satellite missions, such as the Bender constellation, the dual trailing/pendulum constellation would provide global coverage in case of failure of one satellite pair and dense temporal sampling at high latitudes. Furthermore, the data from gravimetric missions are shown to benefit greatly from the data gathered by numerous non-dedicated satellites. From the conducted simulations, it is predicted that the achievable temporal resolution is increased to a few days for the degrees below 10 and, crucially, with no significant level of temporal aliasing. 
Longer estimation periods allow for higher degrees to be estimated, with greatly reduced effects of temporal aliasing in the resulting gravity field models.","Earth Observation; Satellite Geodesy; Time-varying Gravity Field; Mass Transport Processes; GRACE follow-on","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","Geoscience & Remote Sensing","","","",""
"uuid:e8a04372-4c55-4b5f-9bc3-aaab73fe649d","http://resolver.tudelft.nl/uuid:e8a04372-4c55-4b5f-9bc3-aaab73fe649d","Multifaceted Approaches to Music Information Retrieval","Liem, C.C.S.","Hanjalic, A. (promotor)","2015","Music is a multifaceted phenomenon: beyond addressing our auditory channel, the consumption of music triggers further senses. Also in creating and communicating music, multiple modalities are at play. Next to this, it allows for various ways of interpretation: the same musical piece can be performed in different valid ways, and audiences can in their turn have different reception and interpretation reactions towards music. Music is experienced in many different everyday contexts, which are not confined to direct performance and consumption of musical content alone: instead, music frequently is used to contextualize non-musical settings, ranging from audiovisual productions to special situations and events in social communities. Finally, music is a topic under study in many different research fields, ranging from the humanities and social sciences to natural sciences, and—with the advent of the digital age—in engineering as well. In this thesis, we argue that the full potential of digital music data can only be unlocked when considering the multifaceted aspects as mentioned above. Adopting this view, we provide multiple novel studies and methods for problems in the Music Information Retrieval field: the dedicated research field established to deal with the creation of analysis, indexing and access mechanisms to digital music data. A major part of the thesis is formed by novel methods to perform data-driven analyses of multiple recorded music performances. 
Proposing a top-down approach investigating similarities and dissimilarities across a corpus of multiple performances of the same piece, we discuss how this information can be used to reveal varying amounts of artistic freedom over the timeline of a musical piece, initially focusing on the analysis of alignment patterns in piano performance. After this, we move to the underexplored field of comparative analysis of orchestral recordings, proposing how differences between orchestral renditions can further be visualized, explained and related to one another by adopting techniques borrowed from visual human face recognition. The other major part of the thesis considers the challenge of auto-suggesting suitable soundtracks for user-generated video. Building on thoughts in Musicology, Media Studies and Music Psychology, we propose a novel prototypical system which explicitly solicits the intended narrative for the video, and employs information from collaborative web resources to establish connotative connections to musical descriptors, followed by audiovisual reranking. To assess what features can relevantly be employed in search engine querying scenarios, we also further investigate what elements in free-form narrative descriptions invoked by production music are stable, revealing connections to linguistic event structure. Further contributions of the thesis consist of extensive positioning of the newly proposed directions in relation to existing work, and known practical end-user stakeholder demands. As we will show, the paradigms and technical work proposed in this thesis managed to push significant steps forward in employing multimodality, allowing for various ways of interpretation and opening doors to viable and realistic multidisciplinary approaches which are not solely driven by a technology push. 
Furthermore, ways to create concrete impact at the consumer experience side were paved, which can be more deeply acted upon in the near future.","music information retrieval; multimedia information retrieval; music data processing; multimodality; multidisciplinarity; performance analysis; connotation; narrative; use context","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","","",""
"uuid:20148a99-5eb0-4f0a-8bda-9e75c04cc383","http://resolver.tudelft.nl/uuid:20148a99-5eb0-4f0a-8bda-9e75c04cc383","Quantum measurement and real-time feedback with a spin-register in diamond","Blok, M.S.","Hanson, R. (promotor)","2015","Gaining precise control over quantum systems is crucial for applications in quantum information processing and quantum sensing and to perform experimental tests of quantum mechanics. The experiments presented in this thesis implement quantum measurements and real-time feedback protocols that can help to achieve these goals using single electron and nuclear spins associated with the Nitrogen Vacancy (NV) center in diamond. Here we demonstrate that adaptive measurements allow for the manipulation of a quantum system using only measurement backaction and that they can enhance the performance of a single spin magnetometer. Furthermore, by creating entanglement and performing teleportation between two distant NV centers, we implement two elementary operations between two nodes of a quantum network.","Quantum measurement; NV centers; Quantum information processing; Sensing; Adaptive measurements","en","doctoral thesis","","","","","","","","","Applied Sciences","Kavli Institute of Nanoscience Delft","","","",""
"uuid:c0e626e7-fe33-48d2-96b8-ebc46ae6da40","http://resolver.tudelft.nl/uuid:c0e626e7-fe33-48d2-96b8-ebc46ae6da40","A Methodology to Support Decision-Making Towards an Energy-Efficiency Conscious Design of Residential Building Envelope Retrofitting","Konstantinou, T.","","2015","Over the next decade investment in building energy savings needs to increase, together with the rate and depth of renovations, to achieve the required reduction in buildingrelated CO2 emissions. Although the need to improve residential buildings has been identified, guidelines come as general suggestions that fail to address the diversity of each project and give specific answers on how these requirements can be implemented in the design. During early design phases, architects are in search of a design direction to make informed decisions, particularly with regard to the building envelope, which mostly regulates energy demand. To result in an energy-efficient residential stock, this paper proposes a methodology to support refurbishment strategies design. The methodology, called “façade refurbishment toolbox (FRT) approach”, is based on compiling and quantifying retrofitting measures that can be also seen as “tools” used to upgrade the building’s energy performance. The result of the proposed methodology enables designers to make informed decisions that lead to energy and sustainability conscious designs, without dictating an optimal solution, from the energy point of view alone. Its applicability is validated through interviews with refurbishment stakeholders.","refurbishment; residential energy upgrade; design process; OA-Fund TU Delft","en","journal article","MDPI","","","","","","","","Architecture and The Built Environment","Architectural Engineering +Technology","","","",""
"uuid:57dd907b-d9d3-4f28-aced-6c556b68568e","http://resolver.tudelft.nl/uuid:57dd907b-d9d3-4f28-aced-6c556b68568e","Product Innovation in Sustainability-Oriented New Ventures: A Process Perspective","Keskin, D.","Brezet, J.C. (promotor); Wever, R. (promotor)","2015","Despite the recognition that new ventures are potential candidates of creating innovations necessary for sustainability, little is know on how they actually engage in this journey. Sustainability-oriented new ventures are confronted with high levels of uncertainty that stem from the liabilities of being new and small, as well as demonstrating and justifying sustainability benefits of new products to customers and stakeholders. Consequently, they are often not able to identify a promising product-market combination at the outset of the product innovation process, and instead progressively define their business idea. The objective of this exploratory study is to gain a profound understanding of this process: (1) How can the product innovation process in new ventures be described? (2) What explains the similarities and differences among the product innovation processes of new ventures? (3) How does the sustainability motivation of the entrepreneurs influence the product innovation process? To fully understand how new ventures translate sustainable product ideas into new businesses, a process-oriented case study research approach is adopted with a focus on the relationships between key concepts identified in innovation and entrepreneurship literature. The main contributions of this study include: (1) a descriptive model to describe the product innovation process in new ventures, (2) a conceptual model to explain the similarities and differences among the product innovation process in new ventures, and (3) insights into how sustainability motivation of entrepreneurs influences the product innovation process. 
This study provides entrepreneurs, particularly novices, design practitioners and students who are considering starting a new venture based on a sustainable product idea with relevant new insights. In particular, these concern understanding the different types of decision-making logics and their implications for the product development process. Insights into this process can support firms in using different approaches simultaneously and interchangeably, both during the innovation process over time and under different conditions of uncertainty. This enables them to engage in different actions, such as design experiments and stakeholder interactions, with different purposes more effectively. Finally, this study recommends that new ventures combine their strong vision for sustainability with affordable small steps in order to create room for experimentation and increase learning effects in relation to sustainability.","product innovation; new ventures; sustainability motivation; decision-making; effectuation; process research","en","doctoral thesis","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:5c124533-3f65-486c-83a7-04b0e5af94bb","http://resolver.tudelft.nl/uuid:5c124533-3f65-486c-83a7-04b0e5af94bb","A physical model to describe the distribution of adhesion strength in MEMS, or why one MEMS device sticks and another ‘identical’ one does not","Van Spengen, W.M.","","2015","In this paper a model is presented that describes the distribution of adhesion values typically experimentally observed for different MEMS devices that have been fabricated in the same way. This spread is attributed to the fact that different devices differ in the details of their surface roughness, even if these surface roughnesses are modeled as coming from the same ‘parent’ stochastic process. Using Monte Carlo simulations, the effect of surface roughness and relative humidity has been evaluated in detail, both on the expected mean value of the surface interaction energy between the MEMS surfaces, and the expected spread on this value from device to device. By comparing the new model to existing literature reporting this experimentally observed spread, we have found excellent agreement between the experimental spread observed, and the spread calculated with the theoretical model using Monte Carlo simulations. This work paves the way to detailed adhesion failure predictive modeling. It may be used to assess the reliability of MEMS designs that rely on contacting surfaces for their operation, but have a limited restoring force available to separate the surfaces when in contact.","MEMS, adhesion/stiction; distribution; surface roughness; stochastic processes; failure predictive modeling","en","journal article","IOP Publishing","","","","","","","","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","",""
"uuid:f2eda0d4-08d4-4682-b48c-ee2cc064fbd3","http://resolver.tudelft.nl/uuid:f2eda0d4-08d4-4682-b48c-ee2cc064fbd3","Underpinning the Observational Method through Process Modelling and Procurement","Le Masurier, J.","","2015","Geotechnical engineering designs are often predefined before construction commences, in an attempt to eliminate uncertainty. Such predefined design can lead to poor value, either due to waste of resources from over design, when opportunities are missed for optimising the design during construction, or due to the delay and additional cost of dealing with unforeseen ground conditions. The Observational Method (OM) provides an alternative design approach, to proactively manage the uncertainty associated with ground conditions, using a flexible design that is able to be adapted to suit the actual conditions found during construction. Feedback from observations and monitoring allows the designer to maximise the opportunities and minimise the risks. Case studies of applications of this approach are presented to demonstrate the significant benefits that have been derived from managing geotechnical uncertainty in this way. The OM relies on the integration of construction processes and teams, best achieved through a collaborative style of management, rather than under the types of relationship formed under traditional fragmented procurement processes. A synopsis is given of the influence of procurement options on the implementation of the OM. A danger with collaborative approaches to project delivery is that responsibilities are not clearly defined. An example where failure in the management of the process led to a tunnel collapsing during construction is used to illustrate the importance of designing the process and clearly defining responsibilities in a project team that chooses to use an OM design. 
A process modelling methodology is described which facilitates the definition and mutual understanding of processes and responsibilities in a project team, thus ensuring robustness in the OM process. A further practical example is given of an application of this methodology within a project team using the OM for a deep basement construction.","observational method; uncertainty management; process models; procurement","en","conference paper","","","","","","","","","","","","","",""
"uuid:b180a030-74bb-4e28-a6ef-4bee8cea6070","http://resolver.tudelft.nl/uuid:b180a030-74bb-4e28-a6ef-4bee8cea6070","A Consideration on Deterioration Model for Cold Region Tunnel Lining Based on Life-cycle Concept","Sutoh, A.; Maruyama, O.; Kanakiyo, H.T.; Sato, T.","","2015","This paper proposes a tunnel lining deterioration model based upon actual inspection data, in order to carry out strategic maintenance and to rationalize life-cycle cost analysis for tunnel structures. The evaluation value of deteriorating tunnel structures is a non-stationary stochastic process, and reliability assessments of such structures need to take future risk into consideration. In Japan, the probability that an earthquake occurs is high compared with other countries, so a risk analysis of seismic motion and technical components, as well as damages associated with cost variables, has to be dealt with. The tunnel management system model presented in the paper is applied to the asset management of cold-region road tunnels. This research will be developed into an efficient tunnel maintenance system and a quantitative criterion based on pictures of the tunnel lining, using life-cycle cost analysis. Moreover, the deterioration model used for life-cycle assessment needs to consider large-scale earthquake phenomena in Japan.","tunnel structure; deterioration model; stochastic process; life-cycle cost","en","conference paper","","","","","","","","","","","","","",""
"uuid:3460b1dc-e69f-4406-8837-0112c2ec301a","http://resolver.tudelft.nl/uuid:3460b1dc-e69f-4406-8837-0112c2ec301a","Whole failure process analysis for jointed rock masses based on coupling method of DDA and FEM","Su, H.Z.; Wen, Z.; Yang, M.","","2015","Elastic-plastic mechanical behaviour is a typical characteristic of rock masses. Load action can bring about local destruction, large deformation and even whole failure of a rock mass containing discontinuous mediums (e.g. joints, cracks and faults). This is a coupled process of continuous and discontinuous deformation. Discontinuous deformation analysis (DDA) and the finite element method (FEM) are combined to build an elastic-plastic mechanical model. The rock block is divided into finite element meshes, and FEM is used to solve the displacement field and the stress field inside the block. The contacts between the deformable blocks are simulated with the DDA method. The parametric variational principle is derived to analyze the elastic-plastic problem with the above coupling model. The theoretical calculating formulae are obtained from the variational principle, and the governing equations of the mechanical model are established. The proposed method coupling DDA and FEM is used to simulate and analyze the deformation process of jointed rock masses around an underground cavern. It is easy to simulate the whole process from elastic deformation to plastic yielding and failure, and on to large deformation under conditions of plastic flow or instability.","coupling method; jointed rock masses; failure process; discontinuous deformation analysis; finite element method","en","conference paper","","","","","","","","","","","","","",""
"uuid:a018bdc2-6f9d-4fbe-8829-a83b65f5a5e5","http://resolver.tudelft.nl/uuid:a018bdc2-6f9d-4fbe-8829-a83b65f5a5e5","Road pricing policy process: The interplay between policy actors, the media and public","Ardiç, O.","Van Wee, G.P. (promotor)","2015","Although road pricing policies are generally seen as an effective measure to deal with transport-related problems (e.g. congestion), the number of implemented road pricing schemes is relatively limited. The thesis aims to gain insights into the complex interplay between policy actors, the media and the public in road pricing policy processes, in order to understand the success or failure of the introduction of road pricing policies.","road pricing policy; policy process; media; policy actors","en","doctoral thesis","","","","","","","","2015-09-23","Technology, Policy and Management","Transport and Logistics","","","",""
"uuid:db6cef38-4c70-43fe-b331-6845963496bf","http://resolver.tudelft.nl/uuid:db6cef38-4c70-43fe-b331-6845963496bf","Application of full field optical studies for pulsatile flow in a carotid artery phantom","Nemati, M.; Loozen, G.B.; Van der Wekken, N.; Van de Belt, G.; Urbach, H.P.; Bhattacharya, N.; Kenjeres, S.","","2015","A preliminary comparative measurement between particle imaging velocimetry (PIV) and laser speckle contrast analysis (LASCA) to study pulsatile flow using a ventricular assist device in a patient-specific carotid artery phantom is reported. These full-field optical techniques have both been used to study flow and extract complementary parameters. We use the high spatial resolution of PIV to generate a full velocity map of the flow field and the high temporal resolution of LASCA to extract the detailed frequency spectrum of the fluid pulses. Using this combination of techniques, a complete study of complex pulsatile flow in an intricate flow network can be carried out.","image processing; medical optics and biotechnology; optical devices; scattering; OA-Fund TU Delft","en","journal article","Optical Society of America","","","","","","","","Applied Sciences","ImPhys/Imaging Physics","","","",""
"uuid:59fdb616-f4ee-4099-a021-1869fe4a5ba0","http://resolver.tudelft.nl/uuid:59fdb616-f4ee-4099-a021-1869fe4a5ba0","Learning from the Trenches of Embodiment Design: The Designing, Prototyping, and Fabricating a Large Interactive Display","Verlinden, J.C.; Saakes, D.; Luxen, R.F.","","2015","Background The advent of ubiquitous computing requires us to reconsider all aspects of industrial design engineering – to invent, package and optimize such products, services and experiences for society. This project was devised to bridge these in a compelling and magical prototype, called the Kinetic Mirror: a mirror that mimics not only the color but also the shape of whatever is in front of it. It builds upon the efforts performed in the field of projector-based augmented reality and natural design interfaces, and it showcases our ideas on the future prototyping of design concepts. Methods This article describes the complexity of engineering encountered when embodying and producing such interactive systems to disseminate design knowledge. Specifically, we reflect on the conceptualization and development of the Kinetic Mirror: a three-dimensional display that mirrors depth and color in 400 “pixels”. Enabled by the introduction of low-cost structured light sensors, we envisioned an instantaneous physical manifestation of the captured scan. Results Challenges included: selecting electronic parts, software architecture, hardware and networking performance, outsourcing of production, power consumption, and overall assembly and construction. The final system was put on show at five exhibits to test audience engagement and the robustness of the result. This work has implications for design curricula and provides new focal points of attention for design research and prototyping. Conclusions Demonstrations and prototypes are an increasingly important medium to disseminate design knowledge, because experience can only partly be conveyed in written text or even in video. 
However, as products become dynamic, articulated and imbued with behavior, the technology requirements for prototypes become more complex and, as a result, harder to maintain. In this paper we share our lessons learned.","prototyping; shape changing display; design process","en","journal article","Korean Society of Design Science","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:9f05eba0-6f66-438a-9abc-109dae23842a","http://resolver.tudelft.nl/uuid:9f05eba0-6f66-438a-9abc-109dae23842a","Smoothness-Increasing Accuracy-Conserving Filters for Discontinuous Galerkin Methods: Challenging the Assumptions of Symmetry and Uniformity","Li, X.","Vuik, C. (promotor)","2015","In this dissertation, we focus on exploiting superconvergence for discontinuous Galerkin methods and constructing a superconvergence extraction technique, in particular, Smoothness-Increasing Accuracy-Conserving (SIAC) filtering. The SIAC filtering technique is based on the superconvergence property of discontinuous Galerkin methods and aims to achieve a solution with higher accuracy order, reduced errors and improved smoothness. The main contributions described in this dissertation are: 1) an efficient one-sided SIAC filter for both uniform and nonuniform meshes; 2) one-sided derivative SIAC filters for nonuniform meshes; 3) the theoretical and computational foundation for using SIAC filters for nonuniform meshes; and 4) the application of SIAC filters for streamline integration. One-sided SIAC filtering is a technique that enhances the accuracy and smoothness of the DG solution near boundary regions. Previously introduced one-sided filters are not directly useful for most applications since they are limited to uniform meshes and linear equations, and require the use of multi-precision packages in the computation. Also, the theoretical proofs relied on a periodic boundary assumption. We aim to overcome these deficiencies and develop a new fast one-sided filter for both uniform and nonuniform meshes. By studying B-splines and the negative order norm analysis, we generalized the structure of SIAC filters from a combination of central B-splines to using more general B-splines. Then, a ""boundary shape"" B-spline (using multiple knots at the boundary) was used to construct a new one-sided filter. 
We also presented the first theoretical proof of convergence for SIAC filtering over nonuniform meshes (smoothly-varying meshes). One purpose of SIAC filtering is to improve the smoothness of DG solutions. Because of the increased smoothness, we can obtain a better approximation for the derivatives of DG solutions. Derivative filtering over the interior region of uniform meshes was previously studied. However, nonuniform meshes and boundary regions remain a significant challenge. We extended the one-sided filter to a one-sided derivative filter. To deal with nonuniform meshes, we investigated the negative order norm over arbitrary meshes and proposed to scale the one-sided derivative filter with scaling h^µ. For arbitrary nonuniform rectangular meshes, we proved that the one-sided derivative filter can enhance the order of convergence for the αth derivative of the DG solution from k + 1 − α to µ(2k + 2), where µ ≤ 2/3. The most challenging part of this project is recovering the superconvergence of the DG solution over nonuniform meshes through SIAC filtering. Typically, most theoretical proofs for SIAC filters are limited to uniform meshes (or translation invariant meshes). The only theoretical investigations for nonuniform meshes were included in our one-sided and derivative filtering studies. Although our earlier research for nonuniform meshes provides good engineering accuracy, we want to do better mathematically. This is not an easy task since unstructured meshes give DG solutions irregular performance under the negative order norm. In our work, we introduced a parameter to measure the unstructuredness of a given nonuniform mesh. Then, by adjusting the scaling of the SIAC filter based on this unstructuredness parameter, we can obtain the optimal filtered approximation (best accuracy) over a given nonuniform mesh. SIAC filtering for streamline integration is an attempt to use SIAC filters in a realistic engineering application. 
By using the one-sided filter and one-sided derivative filter, we designed an efficient algorithm: filtering the velocity field along the streamline and then using a backward differentiation formula for integration. Compared to the traditional method of filtering the entire field (a multi-dimensional algorithm), the computational cost drops dramatically since its complexity corresponds to a one-dimensional algorithm. We finally note that most of the work presented originates from papers published and submitted during the past four years of this PhD research.","Discontinuous Galerkin method; post-processing; superconvergence; nonuniform meshes; SIAC filtering; boundaries","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied mathematics","","","",""
"uuid:8bf12bf7-e24c-4810-b217-bb1ff8355b80","http://resolver.tudelft.nl/uuid:8bf12bf7-e24c-4810-b217-bb1ff8355b80","Automatic generation of medium-detailed 3D models of buildings based on CAD data","Dominguez-Martin, B.; Van Oosterom, P.; Feito-Higueruela, F.R.; Garcia-Fernandez, A.L.; Ogayar-Anguita, C.J.","","2015","We present the preliminary results of a work in progress which aims to obtain a software system able to automatically generate a set of diverse 3D building models with a medium level of detail, that is, more detailed than a mere parallelepiped, but not as detailed as a complete geometric representation of the building. Each building model is automatically created from a CAD file containing the top, front and side views of the building.","3D building model; CAD data processing","en","conference paper","","","","","","","","","Architecture and The Built Environment","OTB","","","",""
"uuid:fbca8d52-dfc2-42cf-b64c-e2403d603285","http://resolver.tudelft.nl/uuid:fbca8d52-dfc2-42cf-b64c-e2403d603285","Trajectory driven multidisciplinary design optimization of a sub-orbital spaceplane using non-stationary Gaussian process","Dufour, R.; De Meulenaere, J.; Elham, A.","","2015","This paper presents the multidisciplinary optimization of an aircraft carried sub-orbital spaceplane. The optimization process focused on three disciplines: the aerodynamics, the structure and the trajectory. The optimization of the spaceplane geometry was coupled with the optimization of its trajectory. The structural weight was estimated using empirical formulas. The trajectory was optimized using a pseudo-spectral approach with an automated mesh refinement that allowed for increasing the sparsity of the Jacobian of the constraints. The aerodynamics of the spaceplane was computed using an Euler code and the results were used to create a surrogate model based on a non-stationary Gaussian process procedure that was specially developed for this study.","spaceplane multidisciplinary optimization; optimal control; surrogate modeling; Gaussian processes","en","journal article","Springer","","","","","","","","Aerospace Engineering","Aerodynamics, Wind Energy & Propulsion","","","",""
"uuid:9cf82c49-3e69-45d2-aee8-78727f8064cc","http://resolver.tudelft.nl/uuid:9cf82c49-3e69-45d2-aee8-78727f8064cc","Simulation and detection of flaws in pre-cured CFRP using laser displacement sensing","Miesen, N.; Sinke, J.; Groves, R.M.; Benedictus, R.","","2015","The novelty of this research is the detection of different types of flaws in the prepreg carbon fibre-reinforced polymer (CFRP) layup, as opposed to detection in cured products. This paper presents the development of a new method for in situ detection of prepreg CFRP production flaws combining laser displacement sensors and analytical modelling. Experimental results are used to validate the results from the models. The pre-cured flaws are simulated to determine the required specifications of the measurement system. In static and dynamic experiments, the typical production flaws are detected to demonstrate the use of laser displacement sensing as a preventative non-destructive evaluation (NDE) system. During the production of CFRP materials, flaws can be introduced during the layup or curing process. Once a production flaw is embedded and cured in the CFRP laminate, the damage is irreversible and it is expensive to rework or remanufacture the product. Laser displacement sensing is currently used in a wide range of applications in industrial manufacturing and is successfully assessed in this research as a preventative NDE system.","laser displacement sensing; preventative NDE; CFRP; layup process","en","journal article","Springer","","","","","","","","Aerospace Engineering","Aerospace Structures & Materials","","","",""
"uuid:4e5a951b-941c-4f94-8e4e-047cd571c6b5","http://resolver.tudelft.nl/uuid:4e5a951b-941c-4f94-8e4e-047cd571c6b5","Resource recovery from organic waste streams by microbial enrichment cultures","Tamis, J.","Van Loosdrecht, M.C.M. (promotor); Kleerebezem, R. (promotor)","2015","Polyhydroxyalkanoate (PHA) is a natural product that can potentially replace a part of the chemicals and plastics derived from fossil sources. One of the main barriers for market entry of PHA is its relatively high price compared to conventional (fossil) feedstocks. This high price is related to current industrial production methods, which are based on the cultivation of pure microbial cultures of a single species that, among other things, has to be protected from contamination by unwanted microorganisms invading the system from the surroundings. These production methods consequently have to rely on expensive substrates and pre-sterilized equipment. It was proposed that the costs of PHA production can be reduced significantly by replacing the existing industrial practices with open cultures that do not require sterile conditions and use organic waste streams as a feedstock. Open culture processes have a free exchange with the surroundings, and therefore any organism present in nature can in principle enter these systems. To make an open process for PHA production feasible, a selective environment needs to be applied that enriches for species with high PHA accumulation capacity. PHA is produced by numerous microorganisms in natural ecosystems as a reserve compound to balance metabolic requirements during the absence of external energy and carbon sources. Based on this ecological role of PHA, selective environments can be designed that provide a competitive growth advantage to species with a superior PHA producing capacity. 
One approach for selective cultivation of PHA producing species is the feast-famine process, in which the substrate is dosed in short pulses followed by relatively long periods (hours) of absence of external substrate. This process is relatively well understood under controlled lab-scale conditions: e.g., the enrichment of PHA producing cultures dominated by the specialised genus Plasticicumulans (which can accumulate up to 0.9 gPHA gVSS-1) was reported for sequencing batch reactors operated under feast-famine conditions at a short solids retention time (i.e. 24 h) in a relatively long cycle (12 h) (Johnson et al. 2009; Jiang et al. 2011). The objective of this thesis was the development of processes for resource recovery from wastewater with microbial enrichment cultures and to evaluate the industrial relevance of waste-based PHA production, with a focus on the upstream part of the product chain: the production of PHA rich biomass. To this end, we investigated several topics related to the production of PHA from wastewater using a three-step process: (1) pre-treatment to maximize the VFA concentrations, (2) enrichment of a microbial culture with high PHA storing capacity and (3) maximization of the PHA content in a fed-batch accumulation step. The first chapter contains a general introduction of the topic and an explanation of the relevance and scope of the research. In the second chapter, the pre-treatment of organic waste streams was investigated. The goal was to develop a process for efficient production of volatile fatty acids (VFA), the preferred substrate for PHA production. A granular sludge process that produces VFA at high rate, yield and purity while minimizing potential operational costs in an anaerobic sequencing batch reactor (ASBR) at low pH was developed using a model substrate (glucose). 
The inclusion of a short (2 minute) settling phase before effluent discharge enabled effective granulation and very high volumetric conversion rates of 150-300 kgCOD m-3 d-1. The product spectrum remained similar over the tested pH range, with acetate and butyrate as the main products, and a total VFA yield of 60-70% on a chemical oxygen demand (COD) basis. The requirement for base addition for pH regulation could be reduced from 1.1 to 0.6 mol OH- (mol glucose)-1 by lowering the pH from 5.5 to 4.5. Moreover, a virtually solid-free VFA stream could be achieved, which is advantageous for achieving high PHA contents in the accumulation step. Wastewater often contains a fraction of lipids, which are not easily converted to volatile fatty acids in a pre-fermentation step. In the third chapter of this thesis, the conversion of lipids in the feast-famine process was investigated. It was found that lipids do not contribute to PHA production in a standard feast-famine SBR. Instead, lipid-accumulating organisms were enriched. Further optimisation could potentially lead to a process for lipid recovery from wastewater, for instance for the production of biodiesel. A modelling approach was used to compare the experimental data from the pilot- and lab-scale experiments. There are many models for feast-famine processes in the literature, and the differences between the models used by different research groups hinder easy comparison of experimental data. To enable better comparison of experimental results, a (concept) generalized model was developed in chapter four. Based on experimental data available in the literature, we have proposed model improvements for (1) modeling mixed substrate uptake, (2) growth in the feast phase, (3) switching between feast and famine phase, (4) PHA degradation and (5) modeling the accumulation phase. Finally, we provide an example of a simple uniform model. 
In chapter five, the industrial relevance of waste-based PHA production is investigated in a pilot experiment at an industrial location. The Mars candy bar factory in Veghel, The Netherlands, was selected because of its favourable wastewater properties: high VFA and low nitrogen content. The pilot setup followed the three-step process described earlier: (1) fermentable COD was converted into mainly VFA in an anaerobic pre-treatment step, resulting in an average VFA fraction of 0.64 gCOD gCOD-1; (2) selective enrichment in a 200 l SBR led to a microbial culture dominated by P. acidivorans; (3) the PHA content of the biomass was maximized in a fed-batch reactor, resulting in an average PHA content of 0.7 gPHA gVSS-1. The dominant presence of P. acidivorans indicated that the selective pressure in the pilot experiment was similar to that in the lab. The difference in the PHA content achieved in the pilot and the lab (0.9 gPHA gVSS-1) could be explained by two main factors: the presence of non-VFA COD and solids in the wastewater. In chapter six, an outlook for future development is provided. To replace existing chemical and polymer feedstocks with PHA, further optimization of the process is required. Amongst others, minimization of acid and base consumption for pH control, production of clean effluent water, and the recycling of effluent water will still contribute significantly to process efficiency. Nevertheless, in the perspective of these results, we believe the optimization of waste-based PHA production is conceptually not limited by the bioprocesses investigated in this thesis. 
Instead, the most important bottleneck for successful market entry is the development of economical downstream processing and product utilization routes that enable conversion of the PHA-containing sludge into a marketable product.","resource recovery; polyhydroxyalkanoates; microbial conversion; process development; open microbial cultures; pilot-scale","en","doctoral thesis","","","","","","","","","Applied Sciences","Biotechnology","","","",""
"uuid:1983b7c2-68cc-4a36-a219-11a16f9fb742","http://resolver.tudelft.nl/uuid:1983b7c2-68cc-4a36-a219-11a16f9fb742","A Facade refurbishment toolbox supporting energy upgrade of residential building skin","Konstantinou, T.","","2015","Over the next decade, investment in building energy savings needs to increase, together with the rate and depth of renovations, to achieve the required reduction in building-related CO2 emissions. Although the need to improve residential buildings has been identified, guidelines come as general suggestions that fail to address the diversity of each project and give specific answers on how these requirements can be implemented in the design. During early design phases, architects are in search of a design direction to make informed decisions, particularly with regard to the building envelope, which mostly regulates energy demand. To result in a sustainable existing residential stock, this paper proposes a methodology to support the design of refurbishment strategies. The result of the proposed methodology enables designers to make informed decisions that lead to energy- and sustainability-conscious designs, without dictating an optimal solution from the energy point of view alone. Its applicability is validated through interviews with refurbishment stakeholders.","refurbishment; residential energy upgrade; design process","en","conference paper","Verlag der Technischen Universitat Graz","","","","","","","","Architecture and The Built Environment","Architectural Engineering +Technology","","","",""
"uuid:e18b5d06-ae5b-4cfd-89cc-5885967662c3","http://resolver.tudelft.nl/uuid:e18b5d06-ae5b-4cfd-89cc-5885967662c3","A Fourier Cosine Method for an Efficient Computation of Solutions to BSDEs","Ruijter, M.J.; Oosterlee, C.W.","","2015","We develop a Fourier method to solve backward stochastic differential equations (BSDEs). A general theta-discretization of the time-integrands leads to an induction scheme with conditional expectations. These are approximated by using Fourier cosine series expansions, relying on the availability of a characteristic function. The method is applied to BSDEs with jumps. Numerical experiments demonstrate the applicability of BSDEs in financial and economic problems and show fast convergence of our efficient probabilistic numerical method.","backward stochastic differential equations; Fourier cosine expansion method; European options; market imperfections; jump-diffusion process; utility indifference pricing","en","journal article","Society for Industrial and Applied Mathematics (SIAM)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:22cfcc0b-9aaa-4745-b3d1-9d267922a942","http://resolver.tudelft.nl/uuid:22cfcc0b-9aaa-4745-b3d1-9d267922a942","Multi-criteria university selection: Formulation and implementation using a fuzzy AHP","Salimi, N.; Rezaei, J.","","2015","Collaboration with universities as ‘knowledge factories’ is increasingly perceived to be an effective and viable solution for firms to gain competitive advantage. One of the main challenges firms face in this area is how to select the best university for collaboration. This selection undoubtedly affects some other strategic activities of firms, such as managing and governing the relationship with the selected university and, most importantly, firm performance. As such, the selection becomes an important strategic decision that deserves a great deal of attention. Thus far, no systematic attempt has been made to investigate this significant area of research. The main purpose of this study is to formulate a decision-making model for university selection. Reviewing the existing literature on the university-industry relationship yields a list of relevant criteria for this problem. The problem is then formulated as a multi-criteria decision-making (MCDM) model, and a fuzzy AHP is used to provide the solution. To illustrate the model, three Dutch universities are ranked based on the importance of the selected criteria.","University-industry relationship; university selection; multi-criteria decision-making (MCDM); analytic hierarchy process (AHP); fuzzy analytic hierarchy process (FAHP)","en","journal article","Springer","","","","","","","","Technology, Policy and Management","Engineering, Systems and Services","","","",""
"uuid:733536e7-c97f-40d3-aa08-ef353537d946","http://resolver.tudelft.nl/uuid:733536e7-c97f-40d3-aa08-ef353537d946","Challenges in Delivering Green Building Projects: Unearthing the Transaction Costs (TCs)","Qian, K.; Chan, E.H.W.; Khalid, A.G.","","2015","Delivering green building (GB) projects involve some activities that are atypical in comparison with conventional buildings. Such new activities are characterized by uncertainty, and they incur hidden costs that have not been expected nor are they readily appreciated among the stakeholders. This paper develops a typology and chronology to examine the new activities that are associated with transaction costs (TCs) in the real estate development process (REDP) of green building. Through in-depth interviews with representatives from the major developers in Hong Kong who have experiences in GB practice, this study aims to unearth TCs involved at the critical stages of the REDP. Apart from reconfirming the early project planning stage as the most critical in the consideration of TCs, the study results also identified “extra legal liability risk of the GB product” as the major concern for any GB developer in Hong Kong. The key additional activities that bring significant TCs in developing GB are identified and compared to their traditional counterparts. In turn, project managers not only have to pursue overall cost management whilst winning more business, but they also have to pay particular attention to sustainability in order to minimize hidden societal costs. The study also provides a reference for governments and professionals that will aid in forming policy as well as advance the practice of the GB market by optimizing the societal costs.","green building (GB); transaction costs (TCs); uncertainty; real estate development process (REDP); government policy; OA-Fund TU Delft","en","journal article","MDPI","","","","","","","","Architecture and The Built Environment","OTB","","","",""
"uuid:9189a22b-5528-4561-9d52-5e1664082a12","http://resolver.tudelft.nl/uuid:9189a22b-5528-4561-9d52-5e1664082a12","Distributed Graph Filters","Loukas, A.","Langendoen, K.G. (promotor)","2015","We have recently seen a surge of research focusing on the processing of graph data. The emerging field of signal processing on graphs focuses on the extension of classical discrete signal processing techniques to the graph setting. Arguably, the greatest breakthrough of the field has been the extension of the Fourier transform from time signals and images to graph signals, i. e., signals defined on the nodes of irregular graphs. Analogously to how the Fourier transform allows us to decompose complex signals in terms of their fundamental frequencies, the spectral transform describes signals in terms of their relation to the underlying graph. The rigorous examination of the relation between signal and graph has lead to the design of distributed graph filters, graph analogues of classical filters. Graph filters enable us to observe graph data at different scales, effectively separating fine details from inherent signal trends. For instance, a low-pass graph filter controls the size of observable signal structures, attenuating structures of small size, such as those attributed to noise. Beyond noise removal, graph filters are useful for revealing communities (low-pass), identifying event-regions (band-pass), and detecting anomalies (high-pass). Yet, despite their interesting properties, current distributed graph filters have so far been limited. To begin with, it is currently assumed that all data remain static for the duration of computation. When the signal is time-varying and the graph topology dynamic, the computation becomes challenging. Even further, filtering efficiency depends on the correct choice of scale—roughly the number of hops a filter takes into account. 
To choose the scale correctly, however, one must have a priori information about the observed phenomenon, as well as of the instrument of observation—in our case, the graph topology; information which is rarely available and often changes over time. The main contribution of this thesis is tackling the above limitations. First, we relax the computational assumptions posed by current graph filters. We propose distributed graph filters that converge fast, even in the presence of dynamics. Our filters are shown to be robust to message loss and able to track time-varying signals and graphs. Second, we set the foundations of distributed scale-invariant analysis of graph signals. According to classical scale-space theory, if no a priori information about a signal is known, one must observe it at all possible scales. In an analogous way, we show that the scale-invariant observation of a graph signal entails filtering it with a small family of graph filters. Scale-space analysis is therefore possible on graphs, and incurs an overhead equivalent to that of a distributed graph filter. We demonstrate the usefulness of our algorithms by applying them to a number of important information processing problems in sensor networks. Among others, our filters are shown to expand the scope of potential-field search methods, to enhance the detection accuracy of spatial event regions and boundaries, and to improve the identification of signal peaks and pits. Simulations and experiments demonstrate that our algorithms are robust to the difficult conditions posed by wireless communications (such as asymmetric links, phantom effects, message loss, and asynchrony), and that they scale to very large networks.","signal processing on graphs; graph theory; distributed algorithms; sensor networks; graph filters; graph Fourier transform","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software and Computer Technology","","","",""
"uuid:e8972031-94a7-4fd2-bdd0-5defcce3010b","http://resolver.tudelft.nl/uuid:e8972031-94a7-4fd2-bdd0-5defcce3010b","Precursor-Less Coating of Nanoparticles in the Gas Phase","Pfeiffer, T.V.; Kedia, P.; Messing, M.E.; Valvo, M.; Schmidt-Ott, A.","","2015","This article introduces a continuous, gas-phase method for depositing thin metallic coatings onto (nano)particles using a type of physical vapor deposition (PVD) at ambient pressure and temperature. An aerosol of core particles is mixed with a metal vapor cloud formed by spark ablation by passing the aerosol through the spark zone using a hollow electrode configuration. The mixing process rapidly quenches the vapor, which condenses onto the core particles at a timescale of several tens of milliseconds in a manner that can be modeled as bimodal coagulation. Gold was deposited onto core nanoparticles consisting of silver or polystyrene latex, and silver was deposited onto gold nanoparticles. The coating morphology depends on the relative surface energies of the core and coating materials, similar to the growth mechanisms known for thin films: a coating made of a substance having a high surface energy typically results in a patchy coverage, while a coating material with a low surface energy will normally “wet” the surface of a core particle. The coated particles remain gas-borne, allowing further processing.","spark ablation; nanoparticles; coating; gas phase; continuous process; OA-Fund TU Delft","en","journal article","MDPI","","","","","","","","Applied Sciences","ChemE/Chemical Engineering","","","",""
"uuid:88ec5b06-9e4f-43c2-9b28-c92128e97a96","http://resolver.tudelft.nl/uuid:88ec5b06-9e4f-43c2-9b28-c92128e97a96","Non-Implementation of road pricing policy in the Netherlands: An application of the ""advocacy coalition framework""","Ardic, O.; Annema, J.A.; van Wee, G.P.","","2015","The implementation of road pricing policies is dependent on political support for the policy. It is frequently argued that many pricing proposals fail to be implemented due to the opposition of one or a group of policy actors (e.g. political parties, interest groups). This study considers this issue and examines the reasons for non-implementation of proposals for Dutch road pricing policies by analysing the policy position changes of 26 major policy actors and the changes in consensus and conflict among these actors over a policy process of 16 years. The “Advocacy Coalition Framework” (ACF) is used as the theoretical lens. Our findings show that in the Netherlands non-implementation cannot be ascribed to only the opposition of one policy actor or to one group of policy actors, but rather to features of the Dutch political system/culture and complications peculiar to the road pricing subsystem (socio-cultural values related to mobility, complex design issues). We found that internal and external shocks, and policy-oriented learning affected the subsystem and alerted the power balance between pro-and anti-road pricing coalitions. However, these factors did not produce a major policy change, namely, the introduction of a road pricing scheme.","advocacy coalition framework; policy actor; policy process; road pricing","en","journal article","Delft University of Technology, Transport and Logistics Group","","","","","","","","Technology, Policy and Management","Engineering Systems and Services","","","",""
"uuid:0558536c-267c-4d9d-a4b1-003a708ad0b7","http://resolver.tudelft.nl/uuid:0558536c-267c-4d9d-a4b1-003a708ad0b7","Business Process Quality Computation: Computing Non-Functional Requirements to Improve Business Processes","Heidari, F.","Brazier, F.M.T. (promotor)","2015","Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis addresses the question of how to specify and compute the quality of a business process, given the model that stakeholders use. The motivation for this thesis is the increasing importance of the quality of business processes. Knowing the quality of specific business processes enables stakeholders to judge if these processes need improvement. Knowing the quality of the constructs of those processes (viz., events, inputs, activities, and outputs) and the way they are structured enables a more detailed analysis of their shortcomings and provides a basis for the design of improvements. The research challenge of this thesis is grounded in the assumption that: “Organisations need an appropriate means to effectively compute achievement of their goals and objectives by their business processes.” Given this challenge, the main research question on which this thesis focuses is: “Can the quality of a business process be computed quantitatively at different levels of granularity?” The research objective is: “To develop frameworks, factors, and metrics for computing non-functional requirements (quality) of business processes quantitatively at different levels of granularity.” The outcomes of this thesis are: 1) BPIMM, a language-independent business process integrating meta-model, based on the concepts of seven mainstream business process modelling languages: BPMN, EPC, RAD, UML AD, SADT, IDEF0, and IDEF3. 
2) BPC-QC (Business Process Concept - Quality Computation), an approach to quality computation at the lowest level of granularity of a business process. The approach consists of: i. BPC-QEF (Business Process Concept - Quality Evaluation Framework), a language-independent generic framework and algorithm to compute the quality of the constructs of a business process: event, input, activity, and output. ii. A set of business process quality dimensions and factors. The following quality dimensions are distinguished: performance, efficiency, reliability, recoverability, permissibility, and availability. Each dimension categorises different quality aspects in terms of factors. A non-exhaustive set of sixteen quantitative factors is provided. iii. Quality metrics for each of the quality factors, to facilitate a quantitative computation of the quality of a specific construct of a business process. 3) BP-QC (Business Process - Quality Computation), an approach to compute the quality at the highest level of granularity of a business process. The approach consists of: i. BP-CQCF (Business Process - Compositional Quality Computation Framework), a language-independent generic framework and algorithm to compute the quality of a business process as a whole, given the quality of its constructs. ii. A set of generic business process modelling patterns to decompose a business process into more succinct parts, namely: sequential, parallel with synchronisation, exclusive, inclusive, simple loop, and complex loop. iii. A set of over one hundred computational formulae. For each combination of modelling pattern and a quality factor, there is a formula to compute the quality. 4) AAV (Approach to Application and Validation), an evaluation plan to evaluate BPIMM, BPC-QC and BP-QC in practice, together with expert stakeholders. The plan consists of the units of measure, a measurement model, and a case study procedure. 
To evaluate the applicability of the contributions of this thesis to real world business needs, four case studies have been conducted in different environments: a Dutch educational institution, a global financial institution, an international financial service provider, and a Dutch research project on crisis management. Each of these case studies concerns a different, single business process. This thesis shows that: 1) A quality computation approach can be adopted independent of a business process modelling language. 2) Quantitative quality factors can be introduced specifically for the constructs of a business process. 3) Quantitative metrics and computational formulae can be developed for specific quality factors, allowing the computation of different aspects of the quality of a business process quantitatively at different levels of granularity. 4) An evaluation plan can be developed to evaluate the applicability of the contributions of this thesis (viz., BPIMM, BPC-QC, and BP-QC). The contributions of this thesis are designed to be beneficial to the areas of business and management, requirements engineering, software engineering, and business process modelling. In the areas of requirements engineering and software engineering, these contributions are intended to help practitioners to consider non-functional requirements at the earliest stage. In the area of business process modelling, information systems, service computing, and cloud computing, the contributions can be used for quality-driven modelling, design, and redesign. To conclude, knowing the quality value of a business process at different levels of granularity provides a basis for its improvement.","Business process; Quality; Quality computation; Business Process Modelling; Business Process Management; Quality estimation; Quality measurement","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:b20732c8-7d1b-48bc-969b-f51380bc0ec8","http://resolver.tudelft.nl/uuid:b20732c8-7d1b-48bc-969b-f51380bc0ec8","Liquid-Si Technology for High-Speed Circuits on Flexible Substrates","Zhang, J.","Beenakker, C.I.M. (promotor)","2015","Recently, flexible, wearable and disposable electronics have attracted a lot of attention. Printing enables low-cost fabrication of circuits on flexible substrates. Printed organic and metal oxide thin-film transistors (TFTs) have been researched intensively due to the ease of solution-processing. But their carrier mobility and reliability are inferior to conventional CMOS transistors fabricated with crystalline Si. Printed Si TFTs have also been reported, including amorphous Si and poly-crystalline Si TFTs. Both techniques are based on a precursor of liquid-Si solution. The high temperature required for forming Si film and the low mobility due to randomly positioned grain boundaries inside the channel region are limitations for fabricating high-speed circuits on flexible substrates. In this thesis single-grain Si TFTs with high performance produced at a low temperature (< 350 °C) from a printed liquid-Si solution on a flexible substrate is presented. Applications may include display drivers, flexible memories, printed RFID tags and other high-speed circuits on flexible substrates. Liquid Si is the mixture of a cyclopentasilane (CPS) monomer, UV-polymerized CPS and solvent. It can be spin coated on top of a substrate. Under thermal treatment, the solvent is evaporated, Si-H bonds are broken, and an amorphous Si film is formed. After the film is thermally annealed at 650 °C for dehydrogenation, it is crystallized by a XeCl excimer laser (308 nm) to make location-controlled single grains, using the ?-Czochralski crystallization method. Top-gated Si TFTs are fabricated with the channel inside a grain, and self-alignment source/drain doping by ion implantation is employed in the process. 
In Chapter 3, the fabrication process is discussed in detail. Due to the absence of grain boundaries in the channel region, the TFTs show carrier mobilities of 423 cm²/Vs for electrons and 118 cm²/Vs for holes, which are higher than those of organic-, metal oxide-, a-Si- or poly-Si TFTs. NMOS TFTs show stable behavior under gate and drain stress, and negligible hysteresis effect. On the other hand, PMOS TFTs show trap generation and carrier injection from the gate. To meet the temperature requirements for fabrication on flexible substrates, a low-temperature (<350 °C) process is demonstrated in Chapter 4. With doctor-blade coating of pure CPS monomers, curing using UV light, annealing at 350 °C and dehydrogenating by excimer laser at room temperature, an amorphous film with low hydrogen concentration can be formed on top of a polyimide substrate without damaging the substrate. Single-grain Si TFTs are fabricated using a low-temperature a-Si film, and the carrier mobility is 460 cm²/Vs for electrons and 121 cm²/Vs for holes. This is the first time that single-grain Si TFTs are fabricated on top of a flexible substrate. By etching away the polyimide substrate, the devices are released from the supporting Si wafer, and are then transferred onto a 125 µm-thick PEN foil, becoming flexible. The bending diameter, i.e. the smallest diameter to which a device can be bent before destruction, is as low as 6 mm. An improved substrate transfer process is investigated in Chapter 5. By placing the devices between two layers of 10-µm-thick polyimide, the devices could be bent to a diameter of 3 mm. They survive 140 bending-releasing cycles at 3 mm. Theoretically they function after many more cycles. SiO2, as the most important dielectric in the semiconductor industry, is also investigated for low-temperature fabrication from the same liquid-Si solution. SiO2 is fabricated at 350 °C, using a doctor-blade coating method and oxidation of the incompletely thermally annealed a-Si in oxygen plasma. 
As shown in Chapter 6, the atomic ratio O/Si of the resulting oxide film is 1.66, and the breakdown electric field strength is 1.1 MV/cm. Besides being a dielectric layer, the Si-rich SiO2 film can be crystallized by an excimer laser to form nanocrystalline Si dots for flash memory applications. This thesis deals with liquid-Si technology for high-speed circuits on flexible substrates. The work focuses on flexible single-grain Si TFTs and low-temperature silicon oxide. Building on the satisfactory performance of the resulting devices, future work could address new processes for lower-temperature fabrication, new substrate transfer methods for more flexible devices and new circuit designs for complex digital or analog circuits.","flexible devices; solution-processed liquid-Si; thin-film transistors; single-grain transistors; laser crystallization","en","doctoral thesis","","","","","","","","2017-01-26","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","",""
"uuid:d707f257-307a-4fd0-9ee6-160f507a42a8","http://resolver.tudelft.nl/uuid:d707f257-307a-4fd0-9ee6-160f507a42a8","Simulation-integrated Design of Dry Bulk Terminals","Van Vianen, T.A.","Lodewijks, G. (promotor)","2015","To meet the expected increase of seaborne trade flows for coal and iron ore dry bulk terminals need to be designed or expanded. A comprehensive design method for dry bulk terminals is missing. Designs are currently based on rules-of-thumb, practical experiences and average values for specific design criteria. In this research, additions to existing design methods are formulated and dedicated simulation models are developed to support the design process. These models include stochastic processes and operational procedures that occur during daily operations. Stochastic variations take need to be taken into account to realize accurate terminal designs are the variations in ship arrival times and shiploads, the time that material is stored at stockyards and equipment disturbances. Modeling the entire terminal operation is complicated due to dependencies between the different terminal tasks. That’s why the terminal is decomposed in subsystems. Each subsystem is analyzed individually and per subsystem simulation models are developed. At the end, the simulation models are merged into a single terminal model. The simulation models proved to be successful in the formulation and assessment of (re)designs.","discrete-event simulation; dry bulk terminals; ship arrival process; stockyard operation; dry bulk terminals; queuing theory","en","doctoral thesis","TRAIL Research School","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine & Transport Technology","","","",""
"uuid:bfdb2f67-fb3f-4e52-b557-031aba1401be","http://resolver.tudelft.nl/uuid:bfdb2f67-fb3f-4e52-b557-031aba1401be","Multi-dimensional digital human models for ergonomic analysis based on natural data representations","Moes, C.C.M.","","2015","Digital human models are often used for ergonomic analysis of product designs, before physical prototypes are available. However, existing digital human models cannot be used to simultaneously: 1) consider the tissue loads and the physiological effects of the tissue loads; 2) optimise the product properties. This paper develops multi-dimensional digital human models for ergonomic analysis based on natural data representations, which include anatomy, morphology, behaviour, physiology, tissue, and posture data representations. The results show that the multi-dimensional digital human models can be used to: 1) accelerate the design process; 2) assess mechanical and physiological loads inside the body and in the contact area between the body and the product; 3) optimise the quality of the product; 4) reduce the number of user trials needed to create the product.","human modelling; ergonomics; product design; design process","en","journal article","Inderscience","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:fd1edad3-babd-4663-b16e-3b166d3cc674","http://resolver.tudelft.nl/uuid:fd1edad3-babd-4663-b16e-3b166d3cc674","THE INFLUENCE OF ADHESION ON CUTTING PROCESSES IN DREDGING","Miedema, S.A.","Miedema, S.A. (advisor)","2015","","dredging; clamshell bucket; cutting processes; clay; clay cutting; clay properties; adhesion; internal friction angle","","conference paper","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Offshore and Dredging Engineering","","",""
"uuid:847b29f5-c20e-4c06-9b63-c55e1835ddbd","http://resolver.tudelft.nl/uuid:847b29f5-c20e-4c06-9b63-c55e1835ddbd","Beamforming in sparse, random, 3D array antennas with fluctuating element locations","Bentum, Mark J (University of Twente); Lager, I.E. (TU Delft Electrical Engineering Education); Bosma, S. (TU Delft Tera-Hertz Sensing); Bruinsma, W.P. (TU Delft Tera-Hertz Sensing); Hes, R.P. (TU Delft Computer Engineering)","","2015","The impact of the fluctuations in the locations of elementary radiators on the radiation properties of three-dimensional (3D) array antennas is studied. The principal radiation features (sidelobes level, beam squint) are examined based on illustrative examples. Some atypical behaviours, that are specific to 3D arrays, are highlighted. The effect of fluctuations is demonstrated via examples concerning non-uniform arrays. This study is important for designing beamforming strategies in case of constellations of (nano) satellites for space-bound remote sensing of the Earth and the Universe.","Arrays; Three-dimensional displays; Gratings; Antenna radiation patterns; Planar arrays; Array signal processing","en","conference paper","IEEE","","","","","","","","","","Electrical Engineering Education","","",""
"uuid:15003eaf-6b97-4a0b-99ad-ed7ecb8380e6","http://resolver.tudelft.nl/uuid:15003eaf-6b97-4a0b-99ad-ed7ecb8380e6","BALANCE 4P - Balancing decisions for urban brownfield redevelopment: Technical report of the BALANCE 4P project of the SNOWMAN Network coordinated call IV","Norrman, J. (Chalmers University of Technology); Volchko, Y. (Chalmers University of Technology); Maring, L (Deltares); Hooimeijer, F.L. (TU Delft Environmental Technology and Design); Broekx, S. (Flemish Institute for Technological Research); Garcao, R. (Chalmers University of Technology); Beames, A. (Flemish Institute for Technological Research); Kain, J.H. (Chalmers University of Technology); Ivarsson, M. (Enveco); Touchant, K. (Flemish Institute for Technological Research)","","2015","Land take as a result of urbanization is one of the major soil threats in Europe. One of the key measures to prevent further urban sprawl and additional land take, is redevelopment of urban brownfields: underused urban areas with, in many cases, soil and groundwater pollution. The latter issue can be a bottleneck for redevelopment of brownfields instead of green fields. A difficulty for brownfield redevelopments is that in urban projects the responsibilities, tools and knowledge of subsurface engineering and urban planning and design are not integrated; they depend heavily on each other but work in sectors. The urban designer usually deals with opportunities for socio-economic benefits while the subsoil engineer deals with the technical challenges of the site. 
Balance 4P suggests a holistic approach to brownfield redevelopment that (i) recognizes all phases of the urban redevelopment process which are influenced by the planning conditions set by laws, regulations, policy and institutions; (ii) acknowledges multiple subsurface qualities in the brownfield redevelopment project; (iii) promotes knowledge exchange between the surface and the subsurface sectors, across disciplines within each sector, and over time, about the subsurface qualities of the specific project; (iv) focuses on the urban redevelopment project by identifying strategies for redevelopment that can fulfil a good quality of the built environment; (v) assesses the three P’s (People, Planet, Profit/Prosperity) in each urban redevelopment phase; and (vi) puts the Process in focus rather than specific instruments by focusing on identification of WHO should be involved in the knowledge exchange process and HOW it can be mediated. The developed decision support framework aims to guide project teams willing to implement a more holistic approach in practice. The framework includes four steps carried out in an iterative manner: (1) stakeholder analysis, (2) generation of redevelopment alternatives, (3) sustainability assessment of the alternatives, and (4) synthesis of the assessment results, including uncertainty analysis. The guidance describing the steps in the decision support framework and activities within each step can help to structure the decision process and provide support to project teams. 
The anticipated advantages of the holistic approach are redevelopment plans that allow for smart, cost-effective and sustainable solutions in the implementation process by making explicit use of subsurface information and knowledge in the planning process, and possibilities for more long-term sustainable planning with regard to the subsurface by increased awareness of the subsurface as a resource and the associated risks and possibilities.","brownfield; contaminated site; redevelopment; remediation; planning; sustainability assessment; holistic approach; decision process","en","report","Chalmers University of Technology","","","","","","","","","","Environmental Technology and Design","","",""
"uuid:57e2bef2-18bc-4014-8e34-c10d66c6a40a","http://resolver.tudelft.nl/uuid:57e2bef2-18bc-4014-8e34-c10d66c6a40a","An Application of the IPROD Software Framework to Support the Product Development Process in the Automotive and Aerospace Domain","Hoogreef, M.F.M.; Van Dijk, R.E.C.; La Rocca, G.; d'Ippolito, R.","","2014","The Product Development Process (PDP) of manufacturing companies requires the efficient management of huge amounts of data from different sources and their integration in the sub-processes that compose the product development chain. This is a very challenging endeavor for which an integrated approach does not yet exist. The EC FP7 Project iProd aims at filling this void, by developing a flexible and service-oriented software framework, supported by a knowledge base that is structured by means of ontologies, to improve the efficiency and the quality of the PDP in the preliminary design phase. This paper discusses the first prototype of this software framework. The logic, overall software architecture and some of the implementations details of the framework are described. The functionalities of the software framework are demonstrated by means of two use-cases from two different domains, i.e. the automotive domain, represented by the development process of a car door, and the aerospace domain, represented by the development process of a rudder for a business jet. Preliminary testing, using the first prototype, indicates that the application of the framework to the two use-cases can yield benefits in terms of a reduction in development time in the preliminary design phase and results in product quality improvements, by having additional time for more design iterations to increase the maturity level during this phase. However, the framework is to be improved in terms of reliability, efficiency and maintainability. 
These improvements will be done during the development of two more software prototypes.","capturing product information; knowledge engineering; knowledge management; industrial workflow enhancement; ontologies; knowledge-based technologies to support the product development process","en","conference paper","TMCE","","","","","","","","Aerospace Engineering","Aerodynamics, Wind Energy & Propulsion","","","",""
"uuid:9ba846a7-adf6-4a54-826a-e5de0ddac68b","http://resolver.tudelft.nl/uuid:9ba846a7-adf6-4a54-826a-e5de0ddac68b","Urban and regional design: Making the design process explicit","Van Dooren, E.J.G.C.; Willekens, L.A.M.","","2014","Urban and regional design are fundamental skills in the field of urban studies. Designing is a complex, personal, creative and open-ended skill. Performing a well-developed skill is mainly an implicit activity. In teaching, however, it is essential to make explicit what to do. Learning a complex skill like designing, is a matter of doing and becoming aware what should be done and how to do it. Therefore it will be helpful for teachers and students to make the steps, methods and/or activities in the design process explicit. This paper distinguishes five generic elements in the urban and regional design process. These elements are based on the review of academic literature about the design process, on structured observations of design teaching, and based on personal experiences in design teaching. These elements are generic in the sense that they lay beyond the complex, personal, creative and open-endedness of the design skill: (1) exploring and deciding, or experimenting, (2) guiding theme or intended qualities, (3) domains or aspects, (4) frame of reference or library, (5) urbanism language: text and image","design process; urban and regional design; design education; urbanism","en","conference paper","AESOP","","","","","","","","Architecture and The Built Environment","Architectural Engineering and Technology","","","",""
"uuid:3aebb1f4-93e7-43b3-8b33-58f7f6f8478d","http://resolver.tudelft.nl/uuid:3aebb1f4-93e7-43b3-8b33-58f7f6f8478d","From networks to hybrids: Strategic behaviour and crisis-driven change in the regulation and governance of the European financial and economic system,","Groenleer, M.L.P.; Mijs, A.; Ten Heuvelhof, E.F.; Meeuwen, B.; Van der Puil, J.","","2014","A key challenge that European decision-makers struggle with today is regulating and governing the European financial and economic system in a way that is both effective and legitimate. To help address this challenge, this paper asks why regulatory gaps occurred and European governance has been weak, and how these gaps and weaknesses allowed risky behaviour. It then scrutinizes the regulatory governance structures that have emerged in response, particularly at the EU level, to coordinate the financial and economic system. Two illustrative cases are examined: self- regulation by and national supervision of banks and ‘decentred’ fiscal policy coordination by eurozone countries. We point to strategic behaviour as a key driver of the crisis. We also argue that changes in regulatory governance to curb such behaviour have entailed introduction of some form of hierarchy at the supranational level, yet still combined with strong network characteristics, thus creating or strengthening hybridity in regulatory governance.","agencies; coordination and decision-making processes; financial and economic system; governance; hierarchies; hybrids; networks; (self-)regulation; strategic behaviour","en","journal article","The Hebrew University","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:acf1a97b-5bc1-40f6-8318-3658744659a8","http://resolver.tudelft.nl/uuid:acf1a97b-5bc1-40f6-8318-3658744659a8","Characterization of a heterogeneous landfill using seismic and electrical resistivity data","Konstantaki, L.A.; Ghose, R.; Draganov, D.S.; Diaferia, G.; Heimovaara, T.J.","","2014","Understanding the processes occurring inside a landfill is important for improving the treatment of landfills. Irrigation and recirculation of leachate are widely used in landfill treatments. Increasing the efficiency of such treatments requires a detailed understanding of the flow inside the landfill. The flow depends largely on the heterogeneous distribution of density. It is, therefore, of great practical interest to determine the density distribution affecting the flow paths inside a landfill. Studies in the past have characterized landfill sites but have not led to high-resolution, detailed quantitative results. We performed an S-wave reflection survey, multichannel analysis of surface waves (MASW), and electrical resistivity survey to investigate the possibility of delineating the heterogeneity distribution in the body of a landfill. We found that the high-resolution S-wave reflection method offers the desired resolution. However, in the case of a very heterogeneous landfill and a high noise level, the processing of high-resolution, shallow reflection data required special care. In comparison, MASW gave the general trend of the changes inside the landfill, whereas the electrical resistivity (ER) survey provides useful clues for interpretation of seismic reflection data. We found that it is possible to localize fine-scale heterogeneities in the landfill using the S-wave reflection method using a high-frequency vibratory source. Using empirical relations specific to landfill sites, we then estimated the density distribution inside the landfill, along with the associated uncertainty considering different methods. 
The final interpretation was guided by supplementary information provided by MASW and ER tomography.","near surface; shear wave (S-wave); processing; scattering; environmental","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:166e8200-4984-4114-a239-431ee850fb49","http://resolver.tudelft.nl/uuid:166e8200-4984-4114-a239-431ee850fb49","Mapping of regional transport RTD frameworks in Europe","Maras, V.; Radmilovic, Z.; Anoyrkati, E.; Maher, S.; Konings, J.W.; Hoppe, M.; Winter, M.; Condeco, A.; Christodoulou, A.; Mitrovic, S.","","2014","Transport is a key enabler of economic and social activity, but also the source of environmental concerns and other negative externalities. The efficiency of a transport system affects the costs and environmental impacts of the growing volumes of passengers and freight. According to the White Paper (2011), innovation is essential for the development of a European transport strategy in order to achieve the identified challenges. Therefore, this paper presents the results of a mapping process of regional research and innovation activities across the European transport sector. It is based on the intermediate results of FP7 project METRIC (“Mapping European Regional Transport Research and Innovation Capacities”). Particular attention has been given to the examination of prioritized objectives in R&I infrastructure in different EU countries, with specific emphasis on the area of regional transport research. The mapping process was based on efforts to collect significant amount of useful indicators and indexes relating to European regions (quantitative data), as well as relevant policies, initiatives, strategies, clusters, actors, etc (qualitative data). The sets of quantitative data are mainly taken from EUROSTAT and Cluster Observatory websites. Qualitative data was determined from numerous other sources, such as relevant web sites, reports, papers, etc. Lessons have been drawn from specific regional cases of transport research and innovation policy governance. In this respect, we elaborated and researched the state of regional research and innovation activities, policies and programmes and their most recent trends in European regions at NUTS2 level. 
Furthermore, we also detail the importance of various transport sectors for a selection of NUTS2 regions. The work undertaken also included an analysis of how the priorities of innovation and RTD strategies are formulated, what type of innovation is the focus of the transport sector, and how this varies across European regions.","transport sector; research and innovation activities; mapping process; European NUTS2 regions; innovation priorities","en","conference paper","City Net Scientific Research Center","","","","","","","","Architecture and The Built Environment","OTB","","","",""
"uuid:035d130a-1359-4cdb-806f-7e72070df0ef","http://resolver.tudelft.nl/uuid:035d130a-1359-4cdb-806f-7e72070df0ef","The role of river flow and tidal asymmetry on 1-D estuarine morphodynamics","Guo, L.; Van der Wegen, M.; Roelvink, J.A.; He, Q.","","2014","Numerous research efforts have been devoted to understanding estuarine morphodynamics under tidal forcing. However, the impact of river discharge on estuarine morphodynamics is insufficiently examined. Inspired by the Yangtze Estuary, this work explores the morphodynamic impact of river discharge in a 560 km long tidal basin based on a 1-D model (Delft3D). The model considers total load sediment transport and employs a morphodynamic updating scheme to achieve long-term morphodynamic evolution. We analyze the role of Stokes drift, tidal asymmetry, and river discharge in generating tidal residual sediment transport. Model results suggest that morphodynamic equilibrium is approached within millennia by vanishing spatial gradients of tidal residual sediment transport. We find that the interaction between ebb-directed Stokes return flow/river flow with tides is an important mechanism that flushes river-supplied sediment seaward. Increasing river discharge does not induce continuously eroded or accreted equilibrium bed profiles because of the balance between riverine sediment supply and sediment flushing to the sea. An intermediate threshold river discharge can be defined which leads to a deepest equilibrium bed profile. As a result, the shape (concavity or convexity) of the equilibrium bed profiles will adapt with the magnitude of river discharge. 
Overall, this study reveals the significant role of river discharge in controlling estuarine morphodynamics by supplying sediment and reinforcing ebb-directed residual sediment transport.","estuarine morphodynamics; process-based modeling; residual sediment transport; equilibrium profiles","en","journal article","American Geophysical Union","","","","","","","2015-05-04","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:201d5145-0717-4dea-b0d0-c018e510fdaa","http://resolver.tudelft.nl/uuid:201d5145-0717-4dea-b0d0-c018e510fdaa","Formal Abstractions for Automated Verification and Synthesis of Stochastic Systems","Esmaeil Zadeh Soudjani, S.","Abate, A. (promotor); Hellendoorn, J. (promotor)","2014","Stochastic hybrid systems involve the coupling of discrete, continuous, and probabilistic phenomena, in which the composition of continuous and discrete variables captures the behavior of physical systems interacting with digital, computational devices. Because of their versatility and generality, methods for modeling, analysis, and verification of stochastic hybrid systems (SHS) have proved invaluable in a wide range of applications, including biology, smart grids, air traffic control, finance, and automotive systems. The problems of verification and of controller synthesis over SHS can be algorithmically studied using methodologies and tools developed in computer science, utilizing proper symbolic models describing the overall behaviors of the SHS. A promising direction to address formal verification and synthesis against complex logic specifications, such as PCTL and BLTL, is the use of abstraction with finitely many states. This thesis is devoted to formal abstractions for verification and synthesis of SHS by bridging the gap between stochastic analysis, computer science, and control engineering. A SHS is first considered as a discrete time Markov process over a general state space, then is abstracted as a finite-state Markov chain to be formally verified against the desired specification. We generate finite abstractions of general state-space Markov processes based on the partitioning of the state space, which provide a Markov chain as an approximation of the original process. 
We put forward a novel adaptive and sequential gridding algorithm based on non-uniform quantization of the state space that is expected to conform to the underlying dynamics of the model and thus to mitigate the curse of dimensionality unavoidably related to the partitioning procedure. PCTL and BLTL properties are defined over trajectories of a system. Examples of such properties are probabilistic safety and reach-avoid specifications. While the developed techniques are applicable to a wide range of probabilistic properties, the thesis focuses on the study of the particular specification of probabilistic safety, or invariance, over a finite horizon. Abstraction of controlled discrete-time Markov processes to Markov decision processes over finite sets of states is also studied in the thesis. The proposed abstraction scheme enables us to solve the problem of obtaining a maximally safe Markov policy for the Markov decision process and to synthesize a control policy for the original model. The total error, due to the abstraction procedure and to exporting the result back to the original process, is quantified. The abstraction error hinges on the regularity of the stochastic kernel of the process, i.e. its Lipschitz continuity. Furthermore, this thesis extends the results in the following directions: 1) Partially degenerate stochastic processes suffer from non-smooth probabilistic evolution of states. The stochastic kernel of such processes does not satisfy Lipschitz continuity assumptions, which requires us to develop new techniques specialized for this class of processes. We have shown that the probabilistic invariance problem over such processes can be separated into two parts: a deterministic reachability analysis, and a probabilistic invariance problem that depends on the outcome of the first. This decomposition approach leads to computational improvements.
2) The abstraction approach has leveraged piece-wise constant interpolations of the stochastic kernel of the process. We extend this approach for systems with higher degrees of smoothness in their probabilistic evolution and provide approximation methods via higher-order interpolations that are aimed at requiring less computational effort. Using higher-order interpolations (versus piece-wise constant ones) can be beneficial in terms of obtaining tighter bounds on the approximation error. Furthermore, since the approximation procedures depend on the partitioning of the state space, higher-order schemes display an interesting tradeoff between more parsimonious representations and more complex local computation. From the application point of view, an example of SHS is the model of thermostatically controlled loads (TCLs), which captures the evolution of temperature inside a building. This thesis proposes a new, formal two-step abstraction procedure to generate a finite stochastic dynamical model as the aggregation of the dynamics of a population of TCLs. The approach relaxes the limiting assumptions employed in the literature by providing a model based on the natural probabilistic evolution of the single TCL temperature. We also describe a dynamical model for the time evolution of the abstraction, and develop a set-point control strategy aimed at reference tracking over the total power consumption of the TCL population. The abstraction algorithms discussed in this thesis have been implemented as the MATLAB tool FAUST2 (abbreviation for “Formal Abstractions of Uncountable-STate STochastic processes”).
The software is freely available for download at http://sourceforge.net/projects/faust2/.","Formal Abstractions; Automated Verification; Synthesis; Markov Process; Markov Chain; Stochastic Systems; Thermostatically Controlled Loads","en","doctoral thesis","","","","","","","","2015-11-03","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:c3d69714-9304-4522-8600-0440ad72b186","http://resolver.tudelft.nl/uuid:c3d69714-9304-4522-8600-0440ad72b186","Sulphate reducing bacteria in wastewater treatment","Van den Brand, T.P.H.","Van Loosdrecht, M.C.M. (promotor); Brdjanovic, D. (promotor)","2014","The depletion of fresh water sources forces to design innovative integral solutions for the urban water cycle. Usual practice in most cities is to use drinking water to transport waste outside the city via sewer system. For toilet flushing the water quality is less important and seawater could be used as alternative to use of drinking water. Due to high sulphate content in seawater it usage for toilet flushing will increase the sulphate content of wastewater. Sulphate enrichment of wastewater may also origin from industrial wastewater discharges, seawater intrusion in the sewer network or from sulphate presence in the groundwater used for water supply. Sulphate-rich wastewater allows for alternative wastewater treatment solutions such as the novel Sulphate reduction Autotrophic denitrification and Nitrification Integrated (SANI) process, a sulphur-cycle based wastewater treatment process (Wang et al. 2009). The SANI process is a novel treatment concept developed by the Hong Kong University of Science and Technology and TU Delft. Previously, autotrophic (sulphide-based) denitrification has been studied intensively (Shao et al. 2010), also at relatively low temperatures (<30°C) (Kleerebezem & Mendez 2002, Shao et al. 2011). Research on sulphate reduction however, was mainly performed for industrial purposes at higher temperatures (>30°C) (Dries et al. 1998, Vallero et al. 2004). Therefore, the focus of this research was on sulphate reduction processes at lower temperatures and in the context of municipal wastewater treatment. Sulphate reducing bacteria (SRB) play a key role in the sulphur-cycle based wastewater treatment process (e.g. 
SANI); however, SRB can also play a role in a conventional wastewater treatment process. The application of SRB is beneficial due to minimal sludge production, significant coliform removal through exposure to the sulphide produced in the process, their applicability for selective heavy metal removal, and their ability to form granular sludge. The primary objective of this research was to study the kinetics of SRB found in domestic wastewater treatment plants (WWTPs) in moderate climates (e.g. the Netherlands), in the temperature range of 10-20°C. A literature review revealed that temperature, carbon source, sulphide toxicity, and sulphate and salt levels are important parameters for understanding the occurrence and performance of SRB in treating domestic wastewater. Consequently, these parameters were studied to evaluate the applicability of SRB to domestic wastewater treatment. The present study comprises analyses of both the kinetics of SRB (long- and short-term effects) and the microbial population of SRB (by application of Terminal Restriction Fragment Length Polymorphism (T-RFLP) and clone sequencing). The main conclusions of each chapter are addressed below. Experimental methods Throughout this study different types of tests were performed, of which the main tests comprised long-term operation of reactors (months), execution of short-term batch experiments (6 hours) and microbiological population analyses. The reactors were operated for a long time at the following standard conditions: temperature of 20°C, hydraulic retention time (HRT) of 10 hours, solids retention time (SRT) of 15 days, pH 7.6 and under non-aerated (anaerobic) conditions. The influent of the reactors contained: acetate and propionate as organic substrates (300 mgCODVFA/L), ammonium (100 mgN/L), phosphate (10 mgP/L), salinity (0.7%) and sulphate (500 mg/L). The effects of changes in various operational parameters on SRB were evaluated by comparison with this standard reactor.
In each reactor the biomass was evaluated for growth, conversion rates, morphology and the microbial population present. Short-term experiments were conducted to study separately the effects of nutrients (nitrogen, phosphate, sulphide ions, different substrates, et cetera). Microbiological population analyses were performed by T-RFLP and clone sequencing to identify specific microorganisms present in the sludge. SRB in aerobic WWTP In chapter 3 the presence and activity of SRB in aerobic municipal WWTPs were studied, in order to investigate what types of SRB occur naturally in conventional WWTPs. As SRB are strictly anaerobic microorganisms, the ability of SRB to cope with oxygen exposure was also studied. Nine WWTPs were sampled to compare, by T-RFLP and sequencing techniques, the SRB populations in samples taken from the biological tanks and the influent. The T-RFLP results revealed that the SRB populations were very similar in these nine WWTPs. The similarity between the activated sludge of the tanks and the influent was also high (>76%). Desulfobacter postgatei, Desulfobulbus propionicus, and Desulfovibrio intestinalis seemed to be the most commonly detected SRB species among the nine selected WWTPs in the Netherlands. Batch-activity tests (6 hours) using sludge from the WWTPs did not show any SRB activity. This indicated that the SRB were likely derived from the influent, with hardly any enhanced cultivation occurring in the treatment plant itself. Furthermore, two long-term (>3 SRT) sequencing batch reactors were operated in the presence and absence of oxygen, to investigate whether SRB can be active at relatively low DO concentrations. Firstly, a sequencing batch reactor was operated in the absence of oxygen (anaerobically), resulting in a dominant SRB population; then the conditions were altered to low DO conditions, achieved by oxygen transfer between the mixed liquor and the oxygen present in the headspace of the reactor.
In both reactors biodegradable organic carbon was removed, partly through SRB activity. Sulphate reducing activity was also obtained under aerobic conditions due to the formation of granular sludge, a protective strategy of the bacteria against oxygen exposure. In conclusion, SRB occur naturally in conventional WWTPs, but are not very active there. SRB can, however, be active under low DO conditions when growing as sludge granules or biofilms. SRB at low temperatures The SANI process was developed in Hong Kong at a relatively high sewage temperature (30°C). The question whether SRB, as part of the SANI process, could also be applied successfully in moderate climates was central in chapter 4. Since temperature is a key parameter in many biological processes, its kinetic effect on SRB performance, as well as its effect on the SRB population, was studied. Two sequencing batch reactors were operated, for more than 3 SRTs under sufficiently stable conditions, at 10°C and 20°C, to simulate the winter and summer conditions of a moderate climate, respectively. The study revealed that at 20°C complete removal of readily biodegradable organic substrate (volatile fatty acids: VFA) was achieved, while at 10°C only 2/3 of the CODVFA content was removed. A decrease in rate by approximately a factor of 2 caused the incomplete CODVFA removal at 10°C. Although acetate was the only substrate in the effluent, batch experiments indicated that the acetate and propionate consumption rates were equally affected by a temperature decrease from 20°C to 10°C. Increasing the HRT to 13.3 hours assured complete CODVFA removal also for operation at 10°C. Microbial population analyses (T-RFLP and sequencing) revealed that barely any alteration in the SRB population occurred, in response to a temperature decrease from 20 to 10°C, in both laboratory reactors (chapter 4). Also in a full-scale WWTP the SRB population hardly altered with temperature changes in the range of 10-20°C (chapter 3).
Temperatures in the range of 10-20°C therefore do not seem to favour the proliferation of other SRB species. The marginal effect of temperature on the SRB population, and the opportunity to prolong the HRT at 10°C in order to achieve complete VFA COD removal, indicate that SRB may be applied successfully in moderate climates. As for normal wastewater treatment process design, the design should be based on the conversion at the lowest temperature. Acetate and propionate feeding The competition between SRB and methanogens is a point of concern for stable process design; both can convert the organic carbon in the absence of oxygen or nitrate. Methanogens, however, produce methane, which would not be easy to use in the subsequent denitrification step of the SANI process. Among many other factors, the organic substrate type is suggested to play a key role in this competition. Hence, the effects of acetate, propionate and a mixture of both substrates on the proliferation and activity of both microbial groups were evaluated in chapter 5. Three sequencing batch reactors were operated to investigate the effect of these feed procedures. In the acetate-fed reactor, methanogens became dominant, while in the propionate-fed reactor SRB were the dominant population. In the mixed substrate-fed reactor both substrates were fully converted by SRB. All operational characteristics such as the substrate consumption rate, yield and growth rate were similar for SRB from the propionate-fed and mixed substrate-fed reactors. Nonetheless, low similarity (<35%) between the sludge from the propionate- and mixed substrate-fed reactors was found. The SRB population adapted to propionate feeding could easily switch and consume acetate at similar rates, which suggests that these species can consume both acetate and propionate.
These results indicate that under municipal wastewater conditions (20°C), with fluctuations of acetate and propionate in the influent, the SRB are likely to outcompete methanogens more easily than inferred from pure substrate studies on acetate alone. Increased seawater level In general, the saline black water (derived from toilet flushing) is mixed with the grey water (from fresh/drinking water usage) before treatment. As a result, it becomes harder to reuse the water than if the grey water were treated separately. In order to treat these flows separately, the SANI process should be able to treat wastewater with a higher salt and sulphate content. The purpose of chapter 6 was to investigate the effect of higher salt and sulphate concentrations on the performance of SRB. These results are also of interest for industrial effluents with high salt and sulphate levels. For that reason, three sequencing batch reactors were operated in which three different seawater portions in the sewage were applied. The three feed portions were 20, 60 and 100% seawater, corresponding to 0.7, 2.1 and 3.5% salinity and 500, 1,500 and 2,500 mg/L sulphate, respectively. In the reactors operated with 20 and 60% seawater portions the same dominant SRB species was present, while the SRB population shifted in the reactor with a 100% seawater portion. The biodegradable organic carbon was fully converted in each reactor. The biomass-specific biological sulphate reduction rate decreased significantly (~45%) when salinity increased from 0.7 to 3.5%. Still, complete acetate and propionate removal occurred even for the 100% seawater feed. In conclusion, the performance of SRB and the effluent quality of the SANI process should not be affected when the original SANI process is fed with black water. Effect of nutrients and salinity When saline black water is treated separately, increased nutrient concentrations can be expected in the influent.
For instance, the nitrogen, phosphate, COD (acetate and propionate), salt composition, and salt and sulphate levels can vary drastically in the influent as a result of separate black water treatment. Further, variation of these nutrients as a result of regular influent fluctuation can be expected. The main goal of chapter 7 was to study the effect of the selected nutrients and ions on the sulphate reduction rate, as a measure of SRB performance. For this purpose batch tests (6 hours) with varying feed composition were conducted. The sludge from the reactors used in chapter 6 was used as inoculum. The average value of each nutrient and ion was applied, as well as a 10-fold higher concentration. These batch tests revealed that an increased level of propionate (4,000 mgCODVFA/L) decreased the sulphate reduction rate, while acetate alone and a mixture of acetate and propionate (with equal CODVFA concentrations) did not decrease the sulphate reduction rate. Higher concentrations of ammonium and phosphate in the influent did not lead to a change in sulphate reduction rate. Nitrate, however, became inhibitory to SRB at levels higher than 500 mgN/L, due to the formation of nitrite (<10 mgN/L). Batch tests with separate increases in sulphate or NaCl (salinity) concentration demonstrated that the inhibition observed at increased salinity of the sewage (3.5% salinity and 2,500 mgSO42-/L) was mainly caused by the increase in salinity. Furthermore, the salt composition also had an effect on the sulphate reduction rates; the ions Na+ and K+ affected the sulphate reduction rate more severely than Mg2+. In short, given the minor effects of CODVFA, N and P in ranges typical for domestic wastewater, and the opportunities for adaptation to higher salinity, SRB can be applied successfully for the treatment of saline black water and under the usual fluctuations of nutrient concentrations in the influent.
Toxicity of sulphide Results of chapters 4 and 6 showed that the sulphate reduction rate declined within the sequencing batch cycle operation, suggesting either toxicity of sulphide to SRB or a limitation due to depletion of volatile fatty acids (CODVFA). In chapter 8 the cause of this decline in sulphate reduction rate was studied, as well as the toxic effect of sulphide and the ability of SRB to adapt to higher sulphide concentrations. Batch tests (6 hours) with different initial sulphide levels demonstrated that the presence of sulphide decreased the sulphate reduction rate substantially. Batch tests with increased CODVFA content resulted neither in higher rates nor in more sulphide production. Therefore, the decline in rate over time within the reactor was caused by sulphide toxicity rather than by CODVFA depletion. The sulphide toxicity affected acetate and propionate consumption equally. To study the adaptability of SRB to higher sulphide levels, two long-term sequencing batch reactors (>3 SRTs) were operated. The reactors were fed with 400 or 800 mgCODVFA/L, resulting in 200 or 400 mg/L sulphide, respectively. After changing the feed composition from 400 to 800 mgCODVFA/L, no additional CODVFA was consumed, indicating the sludge suffered from sulphide toxicity. However, after a month of operation, all CODVFA was oxidized by SRB. Clone sequencing results revealed that the SRB species differed between the 400 and 800 mgCODVFA/L fed reactors, indicating that SRB able to resist higher sulphide levels were selected to achieve complete organic carbon removal. The adaptation of SRB to sulphide was accomplished through the emergence of a new dominant species within the reactor. In conclusion, since 400 mg/L CODVFA is usual for domestic wastewater (resulting in 200 mg/L sulphide), no sulphide toxicity issues are expected for the application of SRB in domestic wastewater treatment.
Final remarks This research contributed to a better understanding of SRB application in (domestic) wastewater treatment. It revealed that SRB can perform well under moderate climate conditions, and no major bottlenecks were identified regarding saline black water treatment. Furthermore, fluctuation of acetate and propionate in the influent does not seem to affect the performance of SRB, and propionate even seems beneficial for SRB in preventing methanogenesis. In the future more attention should be given to the integration of these findings with the SANI process, as well as to the overall seawater toilet-flushing concept.","sulphate reducing bacteria; wastewater treatment; activated sludge; anaerobic process","en","doctoral thesis","","","","","","","","","Applied Sciences","Biotechnology","","","",""
"uuid:6a36269c-0a6c-445f-a9b4-fac7404ae45c","http://resolver.tudelft.nl/uuid:6a36269c-0a6c-445f-a9b4-fac7404ae45c","The impact of lateral house connections on the serviceability of sewer systems","Post, J.A.B.; Pothof, I.W.M.; Ten Veldhuis, J.A.E.; Langeveld, J.G.; Clemens, F.H.L.R.","","2014","It remains unknown how lateral house connections affect the performance of the sewer system, since the assessment of serviceability is mainly based on the state of the main sewer system. Further insight into the contribution of lateral house connections to the overall level of service provided can aid to target investments to parts of the system where it is most effective. To this end, techniques from the reliability theory were applied to a commercial sewer maintenance database to quantify the impact of lateral house connections on the serviceability of sewer systems. Analysis of the data showed that the failures follow a Poisson distribution. A comparison of the derived failure rate with values obtained from a different study revealed that the blockage rate of lateral house connections is an order of magnitude greater than the failure rate of the dominant mechanism of main sewer systems, thereby making the impact of lateral house connections on the serviceability of sewer systems substantial.","serviceability; reliability theory; lateral house connections; blockage database; Poisson process","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:c88e9b2c-d77e-4886-a5f6-5a749f986f0a","http://resolver.tudelft.nl/uuid:c88e9b2c-d77e-4886-a5f6-5a749f986f0a","Quick assessment tool for assurance of structural safety in the building process","Terwel, K.C.; Jansen, S.J.T.","","2014","From forensic investigation it is known that many structural failures can be attributed to human errors and organizational factors. To provide project leaders with information on the current state of factors in the building process influencing structural safety, we developed a quick assessment tool. Logistic regression was used, based on data of influencing factors from a national questionnaire, to derive a function that predicts the probability of a successful outcome, regarding structural safety. The results show that a function with only the factors collaboration, risk analysis and control could predict a successful project correctly in 85% of cases, with collaboration as most determining factor. Although this method has limitations, it gives a quick indication of the degree in which problems regarding structural safety are to be expected. We believe that this tool has the potential to develop into a risk management tool.","risk management; structural safety; building process; quality assurance","en","conference paper","IABSE","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","",""
"uuid:03453047-d336-470e-9290-a1bbf5bb6b32","http://resolver.tudelft.nl/uuid:03453047-d336-470e-9290-a1bbf5bb6b32","Challenges in the design of smart product-service systems (PSSs): Experiences from practitioners","Valencia Cardona, A.M.; Mugge, R.; Schoormans, J.P.L.; Schifferstein, H.N.J.","","2014","Smart Product-?Service Systems (Smart PSSs) are market offerings that integrate products and services into one single solution through the implementation of IC technology. Smart PSSs allow organizations to develop relationships with consumers in new ways and have a growing presence in the marketplace. As designers’ involvement in the design of these offerings is likely to increase, the understanding of the challenges emerging from the integration of product and service is of increasing relevance for the effective management of the design process. To identify the challenges in the design of Smart PSSs, interviews with ten practitioners from various companies with experience in the design of Smart PSSs were conducted. Based on the findings, we outline seven challenges: defining the value proposition, maintaining the value proposition over time, creating high-?quality interactions, creating coherence in the Smart PSS, stakeholder management, the clear communication of goals, and the selection of means and tools in the design process. Furthermore, we outline five ways in which designers can contribute to the design process through the use of their capacities: designers as foreseers of future scenarios, as guardians of experiences, as integrators of stakeholders’ needs, as problem solvers, and as visualizers of goals.","smart; product-service system; challenge; design; process","en","conference paper","Design Management Institute","","","","","","","","Industrial Design Engineering","Product Innovation Management","","","",""
"uuid:1e55ca20-c0b2-449f-904b-4921d04189ae","http://resolver.tudelft.nl/uuid:1e55ca20-c0b2-449f-904b-4921d04189ae","Optical coherence elastography for measuring the deformation within glass fiber composite","Liu, P.; Groves, R.M.; Benedictus, R.","","2014","Optical coherence elastography (OCE) has been applied to the study of microscopic deformation in biological tissue under compressive stress for more than a decade. In this paper, OCE has been extended for the first time, to the best of our knowledge, to deformation measurement in a glass fiber composite in the field of nondestructive testing. A customized optical coherence tomography system, combined with a mechanical loading setup, was developed to provide pairs of prestressed and stressed structural images. The speckle tracking algorithm, based on 2D cross correlation, was used to estimate the local displacements in micrometer scale. The algorithm was first evaluated by a test of rigid body translation. Then the experiments were carried out with the tensile test and three point bending on a set of glass fiber composites. The structural features and structural variations during the mechanical loadings are clearly observed with the presented displacement maps. The advantages and prospects for OCE application on glass fiber composites are discussed at the end of this paper.","optical coherence tomography; tomographic image processing; speckle imaging; nondestructive testing","en","journal article","Optical Society of America","","","","","","","","Aerospace Engineering","Aerospace Structures & Materials","","","",""
"uuid:8783bb66-5594-4fb2-a8f8-e4689c7707d7","http://resolver.tudelft.nl/uuid:8783bb66-5594-4fb2-a8f8-e4689c7707d7","In-Plane Displacement Detection With Picometer Accuracy on a Conventional Microscope","Kokorian, J.; Buja, F.; Van Spengen, W.M.","","2014","In this paper, we present a new method for detecting in-plane displacements in microelectromechanical systems (MEMS) with an unprecedented sub-ångström accuracy. We use a curve-fitting method that is commonly employed in spectroscopy to find peak positions in a spectrum. We fit a function to the intensity profile of the image of a silicon beam that was captured with a CCD camera on an optical microscope. The position resolution depends on the amount of pixel noise and on how the moving feature is spread across the detector pixels. The resolution is usually limited by photon shot noise, which can be controlled and lowered in several ways. To demonstrate the technique we measure the adhesion snap-off of two silicon surfaces. We assess the accuracy of the technique using two different silicon MEMS devices and an experimental ultrananocrystalline diamond device. The lowest position noise that we report is obtained by summing 1,577 image lines and is as low as 60 pm root mean square.","displacement measurement; MEMS; optical noise; optical image processing; optical position measurement; subpixel resolution","en","journal article","IEEE","","","","","","","","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","",""
"uuid:1778264b-c15c-49cf-9833-e95f5ec58194","http://resolver.tudelft.nl/uuid:1778264b-c15c-49cf-9833-e95f5ec58194","Design Principles for Improving the Process of Publishing Open data","Zuiderwijk, A.M.G.; Janssen, M.F.W.H.A.; Choenni, R.; Meijer, R.F.","","2014","· Purpose: Governments create large amounts of data. However, the publication of open data is often cumbersome and there are no standard procedures and processes for opening data. This blocks the easy publication of government data. The purpose of this paper is to derive design principles for improving the open data publishing process of public organizations. · Design/methodology/approach: Action Design Research (ADR) was employed to derive design principles. The literature was used as a foundation, and discussion sessions with civil servants were used to evaluate the usefulness of the principles. · Findings: Barriers preventing easy and low-cost publication of open data were identified and connected to design principles, which can be used to guide the design of an open data publishing process. Five new principles are 1) start thinking about the opening of data at the beginning of the process, 2) develop guidelines, especially about privacy and policy sensitivity of data, 3) provide decision support by integrating insight in the activities of other actors involved in the publishing process, 4) make data publication an integral, well-defined and standardized part of daily procedures and routines, 5) monitor how the published data are reused. · Research limitations/implications: The principles are derived using ADR in a single case. A next step can be to investigate multiple comparative case studies and detail the principles further. We recommend using these principles to develop a reference architecture. · Practical implications: The design principles can be used by public organizations to improve their open data publishing processes. 
The design principles are derived from practice and discussed with practitioners. The discussions showed that the principles could improve the publication process. · Social implications: Decreasing the barriers for publishing open government data could result in the publication of more open data. These open data can then be used, stimulating various public values, such as transparency, accountability, innovation, economic growth and informed decision and policy-making. · Originality/value: Publishing data by public organizations is a complex and ill-understood activity. The lack of suitable business processes and the unclear division of responsibilities block the publication of open data. This paper contributes to the literature by presenting design principles which can be used to improve the open data publishing process of public sector organizations.","open data; e-government; publishing process; principles; business process reengineering; action design research","en","journal article","Emerald Group Publishing Limited","","","","","","","","Technology, Policy and Management","Engineering Systems and Services","","","",""
"uuid:bb7aada7-2093-4c86-a6c3-459557ead76c","http://resolver.tudelft.nl/uuid:bb7aada7-2093-4c86-a6c3-459557ead76c","Batch-to-batch learning for model-based control of process systems with application to cooling crystallization","Forgione, M.","Van den Hof, P.M.J. (promotor)","2014","From an engineering perspective, the term process refers to a conversion of raw materials into intermediate or final products using chemical, physical, or biological operations. Industrial processes can be performed either in continuous or in batch mode. There exist for instance continuous and batch units for reaction, distillation, and crystallization. In batch mode, the raw materials are loaded in the unit only at the beginning of the process. Subsequently, the desired transformation takes place inside the unit, and the products are eventually removed altogether after the processing time. In order to obtain the desired production volume, several batches are repeated. In an industrial process, several variables such as temperatures, pressures, and concentrations have to be regulated in order to ensure safety, maintain the product quality, and optimize economic criteria. In principle, model-based control techniques available in the literature could be systematically utilized in order to achieve these goals. However, a limitation to the applicability of model-based techniques for batch process control is that the available models of batch processes often suffer from severe uncertainties. In this thesis, we have investigated the use of measured data in order to improve the performance of model-based control of batch processes. Our approach consists in using the measured data in order to refine from batch to batch the model that is used to design the controller. By doing so, the performance delivered by the model-based controller is expected to improve. 
We have developed the parametric model update technique Iterative Identification Control (IIC) and the non-parametric model update technique Iterative Learning Control (ILC). While in IIC the measured batch data are used to update from batch to batch the parameter estimates for the uncertain physical coefficients, in ILC the data are used to compute a non-parametric, additive correction term for a nominal process model. We have tested the ILC and IIC algorithms for the batch cooling crystallization process both in a simulation environment and on a real pilot-scale crystallization setup. We have shown that the two approaches have complementary advantages. On the one hand, the parametric approach allows for faster learning since it produces a parsimonious representation of the process. On the other hand, the non-parametric approach can cope effectively with the serious issue of structural mismatches owing to the use of a more flexible representation. Furthermore, we have investigated the use of excitation signals to enhance the performance of parametric model update techniques in an iterative identification/controller design scheme similar to IIC. The excitation signals have a dual effect on the overall control performance. On the one hand, the application of an excitation signal superimposed on the normal control input leads, after identification, to increased model accuracy, and thus better control performance. On the other hand, the excitation signal also causes a temporary performance degradation, since it acts as a disturbance while it is applied to the control system. For linear dynamical systems, we have shown that the problem of designing the excitation signals aiming to maximize the overall control performance can be approximated as a convex optimization problem. 
The lack of generally applicable and computationally efficient experiment design tools for nonlinear systems is the main bottleneck for the optimal design of the excitation signals in the case of batch processes. In this thesis, we have developed a novel experiment design method applicable to the class of fading-memory nonlinear systems. Limiting the excitation signals to a finite number of levels, the information matrix can be expressed as a linear function of the frequency of occurrence of each possible pattern having duration equal to the memory of the system. Exploiting the linear relation between the frequencies and the information matrix, several experiment design problems can be formulated as convex optimization problems.","Iterative Learning Control; Iterative Identification Control; Identification for Control; Batch cooling crystallization; Experiment Design; Process Control","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","DCSC","","","",""
"uuid:27899eb2-f29a-4a83-a229-8fbeec6a2b3c","http://resolver.tudelft.nl/uuid:27899eb2-f29a-4a83-a229-8fbeec6a2b3c","Advances in Model-Based Design of Flexible and Prompt Energy Systems -- The CO2 Capture Plant at the Buggenum IGCC Power Station as a Test Case","Trapp, C.","Colonna, P. (promotor)","2014","Pre-combustion CO2 capture applied to integrated gasification combined cycle (IGCC) power plants is a promising technical solution to reduce CO2 emissions due to fossil-fuelled electricity generation in order to meet environmental targets in a carbon-constrained future. The pre-combustion capture process allows to effectively remove CO2 from synthetic gas prior its combustion at high partial pressures. In addition, the net energy efficiency of decarbonised IGCC plants is estimated to be higher than that of conventional pulverized coal steam power plants integrating carbon capture. However, the removal of CO2 leads to a high efficiency penalty for the thermal power plant and an increase in system complexity. Moreover, the integration of carbon capture into the very complex gasification process and combined cycle power plant leads to technical problems as far as dynamic operation is concerned. Transient performance of future IGCC power plants becomes extremely relevant in order to balance the rapidly growing share of electricity converted from inherently intermittent renewable sources, such as wind and solar energy. The work documented in this thesis was part of a larger research project involving the utility company Vattenfall, the Energy research Centre of the Netherlands (ECN) and the Delft University of Technology aimed at the development of pre-combustion CO2 capture technology to be applied in a future commercial-scale IGCC power plant. 
A unique, fully instrumented CO2 capture pilot plant was realized at the Buggenum IGCC power station in the Netherlands in order to demonstrate the technology, to investigate its performance and to generate data for model validation. The most relevant research objectives of this thesis are to improve and develop general tools and methodologies which i) facilitate detailed steady-state performance analysis and sophisticated optimization of process design and operating conditions, and ii) enable studies on process dynamics already during the early design phase in order to support the choice of equipment and control strategies aiming at the improvement of transient performance. The tools and methods are developed for the case-specific analysis of the pre-combustion CO2 capture plant at the Buggenum IGCC power station. With respect to generalization, it is worth highlighting that the adopted system engineering techniques and tools are applicable to the design of a larger class of chemical and energy conversion systems with minor changes.","pre-combustion CO2 capture; IGCC power plants; process design optimization; dynamic performance","en","doctoral thesis","","","","","","","","2014-06-16","Aerospace Engineering","Aerodynamics, Wind Energy, Flight Performance and Propulsion","","","",""
"uuid:9cbba7af-29da-4b70-8369-20271c2ca69b","http://resolver.tudelft.nl/uuid:9cbba7af-29da-4b70-8369-20271c2ca69b","A Multilevel Design Model: The mutual relationship between product-service system development and societal change processes","Joore, J.P.; Brezet, J.C.","","2014","Change actors like designers play a strategic role in innovation and transition processes towards a sustainable society. They act at all levels of society and need help to find their way through increasingly interrelated innovation systems. To support their efforts, there is a need for a design supportive model that (1) can provide insight into the development of new products and product-service systems, as well as in developments that occur in society as a whole; (2) can provide insight into the relationship between functional problems on the one hand, and more abstract societal problems on the other; (3) describe design processes, change processes and transition processes in a consistent, mutually comparable manner that can potentially be used to structure future design-based initiatives. In this paper a Multilevel Design Model (MDM) is discussed, combining two specific functionalities: First a cyclic iterative design approach that may be generic enough to describe both the design of physical artefacts and the design of product-service systems, as well as the way that complex societal change processes may occur. Second a hierarchical systems approach, where on each aggregation level a similar description of the design, change or transition process is applied. 
The MDM is discussed by means of a simulated case example in the area of sustainable transportation and electric transport, explaining that the model may indeed be useful to describe and potentially explain some of the dilemmas that occur during the course of complex design processes.","innovation; multilevel design process; transition management; sustainable transport; electric vehicle; product-service system","en","journal article","Elsevier","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:8a49e66f-440f-4393-9992-1ba15c467d49","http://resolver.tudelft.nl/uuid:8a49e66f-440f-4393-9992-1ba15c467d49","On the Generic Utilization of Probabilistic Methods for Quantification of Uncertainty in Process-based Morhpodynamic Model Applications","Scheel, F.; De Boer, W.P.; Brinkman, R.; Luijendijk, A.P.; Ranasinghe, R.W.M.R.J.B.","","2014","A variety of uncertainty sources are inherent in process-based morphodynamic modelling applications. There is an increasing demand for the quantification of these uncertainties. This contribution introduces a probabilistic-morphodynamic (PM) modelling framework that enables this quantification. The PM modelling framework provides a systematic approach, while also lowering the required effort for inclusion of uncertainty quantification in morphodynamic model studies. Applicability and added value is shown using a pilot application to the Holland coast.","morphodynamics; process-based; morphodynamic models; uncertainties; probabilistic; uncertainty quantification; Unibest CL+; Holland coast; ICCE 2014","en","conference paper","Coastal Engineering Research Council","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:ddaf5ce1-8e45-4e8d-a1db-47bc4e74e04c","http://resolver.tudelft.nl/uuid:ddaf5ce1-8e45-4e8d-a1db-47bc4e74e04c","Dynamic light scattering from pulsatile flow in the presence of induced motion artifacts","Nemati, M.; Presura, C.N.; Urbach, H.P.; Bhattacharya, N.","","2014","Continuous health monitoring has become a major theme of our aging society. Portable devices play an important role here. Many optical portable devices are susceptible to motion induced artifacts. We have performed an experimental study for detection of fluid pulsation based on multi-exposure speckle images, in presence of motion induced artifacts. Induced motion of a wide range of frequencies and amplitudes were generated to resemble sensor motion with respect to skin. The data was analyzed using speckle contrast and correlation. We concluded that both techniques have their own advantages, depending on the measurement configuration. A study of angles between illumination and detection revealed that larger angles yields better signal. Shorter exposure time was more successful in extracting the signal. We also performed in-vivo measurements that agree with the in-vitro case. We also show that a minimum collection of two pixels from the speckle image is sufficient to extract relevant results.","image processing; medical optics and biotechnology; optical devices; scattering; OA-Fund TU Delft","en","journal article","Optical Society of America","","","","","","","","Applied Sciences","ImPhys/Imaging Physics","","","",""
"uuid:b8efd285-23a5-4ecc-a839-0f6ddc8e1298","http://resolver.tudelft.nl/uuid:b8efd285-23a5-4ecc-a839-0f6ddc8e1298","Size and shape control of sub-20 nm patterns fabricated using focused electron beam-induced processing","Hari, S.; Hagen, C.W.; Verduin, T.; Kruit, P.","","2014","In a first study to analyze the feasibility of electron beam-induced deposition (EBID) for creating certain patterns in advanced lithography, line patterns were fabricated on silicon wafers using EBID. The growth conditions were such that the growth rate is fully determined by the electron flux (the current limited growth regime). It is experimentally verified that different patterning strategies, such as serial versus parallel patterning and single pass patterning versus multiple pass patterning, all lead to the same result in this growth regime. Images of EBID lines, imaged in a scanning electron microscope, were analyzed to determine the position of the lines, the width of the lines, and the linewidth roughness (LWR). The results are that the lines have an average width of 13.7 nm, an average standard deviation of 1.6 nm in the center position of the lines, and an average LWR of 4.5 nm (1? value). As an example of the capabilities of EBID, a logic-resembling lithography pattern was fabricated","electron beam-induced deposition; focused electron beam-induced processing nanofabrication; line edge roughness; patterning; lithography; e-beam lithography","en","journal article","SPIE","","","","","","","","Applied Sciences","Imaging Physics","","","",""
"uuid:56253974-e017-498e-8e45-3b5f4f63f682","http://resolver.tudelft.nl/uuid:56253974-e017-498e-8e45-3b5f4f63f682","Coherent Fourier scatterometry for detection of nanometer-sized particles on a planar substrate surface","Roy, S.; Assafrao, A.C.; Pereira, S.F.; Urbach, H.P.","","2014","Inspection tools for nano-particle contamination on a planar substrate surface is a critical problem in micro-electronics. The present solutions are either expensive and slow or inexpensive and fast but have low sensitivity because of limitations due to diffraction. Most of them are also substrate specific. In this article we report how Coherent Fourier Scatterometry is used for detection of particles smaller than ?/4. Merits of the technique, especially, the procedures to improve SNR, its flexibility and its robustness on rough surfaces are discussed with simulated and experimental results.","optical inspection; Fourier optics and signal processing; scattering measurements; scanning microscopy","en","journal article","Optical Society of America","","","","","","","","Applied Sciences","ImPhys/Imaging Physics","","","",""
"uuid:212bf75a-24e9-4944-a07c-aaa51e7a2e7e","http://resolver.tudelft.nl/uuid:212bf75a-24e9-4944-a07c-aaa51e7a2e7e","A systematic approach to adressing the influence of man-machine interaction on situation awareness","Van Doorn, E.C.; Horvath, I.; Rusak, Z.","","2014","This paper presents a systematic approach towards the study of required situation awareness (RSA) in traffic management context. Current theories are not suitable to clearly define the RSA in complex man-machine interaction (MMI) contexts. Deficiencies of man-machine interaction are difficult to recognize and resolve. This paper analyzes: (i) how the individual, task and system factors define the MMI, (ii) how the MMI and information needs influence the assessment of situation awareness (SA), and (iii) what influence they together have on the required situation awareness. This paper presents a structured analysis scheme, developed to gain more holistic body of knowledge about SA. This scheme was applied in traffic management context to structure cognitive task analysis sessions and to gain insight in SA in complex MMI context. It helps to better understand which information needed by the operator is part of SA, and which information needs to be available, but will not be part of operator’s SA. Future research will focus on developing interface solutions for an awareness-enhancing informing.","required situation awareness; man-machine interaction; information processing; analysis scheme; applied cognitive task analysis; traffic management","en","conference paper","Faculty of Industrial Design Engineering, Delft University of Technology","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:910fbe01-d6f6-4a58-87a9-b2420b76fe2b","http://resolver.tudelft.nl/uuid:910fbe01-d6f6-4a58-87a9-b2420b76fe2b","Development of a framework for information acquisition and processing in cyber-physical systems","Li, Y.; Song, Y.; Horvath, I.; Opiyo, E.Z.; Zhang, G.","","2014","In the designing and modeling of CPSs, the information acquisition and processing processes are often application dependent and process oriented. Those information management frameworks are simple and effective for small scale systems. However, many functions developed are not reusable or cannot be directly re-used, when a large number of details and relations need to be added. Aiming at designing a flexible and scalable system with “plug-and-play” components, a preliminary information acquisition and processing framework for CPSs is proposed in this paper based on the object oriented design (OOD) method. The concept of informational hierarchy within CPSs is identified first. Then it is further elaborated as instantaneous information, dynamic information and context information. Using these three types of information, together with the physical properties of a component in CPSs, the concept of hybrid object is proposed as the basic component of the proposed framework. By defining the inherent and update operation of hybrid objects, the proposed information acquisition and processing framework is formed with hierarchical hybrid objects. To verify the effectiveness and the efficiency of the proposed framework, a case study on designing and modeling a gas metal arc welding (GMAW) based rapid manufacturing system is presented. Limitations of the proposed framework and future research directions are discussed as well.","Information acquisition and processing framework; object oriented design; hybrid object; plug-and-play; cyber-physical systems","en","conference paper","TMCE","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:8802f27d-7942-401e-a85e-9f2f52009a31","http://resolver.tudelft.nl/uuid:8802f27d-7942-401e-a85e-9f2f52009a31","Product design for elderly-visual design information inspired a new perspective in design education","Langeveld, L.H.","","2014","A workshop Product Design for Elderly was held in Beijing and organized by the School of Digital Media and Design, Beijing University of Posts and Telecommunications. A domestic appliances company had sponsored a part of the workshop and brought in the topics. The objectives of the workshop were creating a product design by each design team. and gaining design competence with use of visual design information. The work method was based on the sequential design process of Pahl and Beitz. The archetype of process design process was the building stone for the proposed product design for elderly process, which had a sequential character and feedback options. During the workshop were held three questionnaires for observation and experience research with a new perspective in design education. Communication and taking design decisions are the two design activities that run through the entire process. The go decision appeared to be crucial, so it could be defined as a separate design step. Visual information will be gathered among other things with a digital camera by taking pictures and making movies. This is “walking design”. The idea generation is based on the gathered visual design information, the problem definition, the design brief and design space that defines the boundaries of the design solution. The concept development can be well presented in a visual way. Cost estimation is also included in the proposed process.","elderly capability; design process; design method; product design; visual design information","en","conference paper","Delft University of Technology","","","","","","","","Industrial Design Engineering","Industrial Design","","","",""
"uuid:0cbaeec4-f8c2-4523-a50a-8946606be023","http://resolver.tudelft.nl/uuid:0cbaeec4-f8c2-4523-a50a-8946606be023","Scenarios for effective climate change adaptation in Dutch social housing","Roders, M.J.; Straub, A.","","2014","Housing managers are constantly confronted with the changing demands that their building stock has to comply with. One of the change agents is the changing climate, caused primarily by human-induced greenhouse gases. However, even if the emissions of all these gases were halted now, the process of climate change would not completely cease. Furthermore, the impacts of climate change would most probably be felt for many more years. In urban areas, the impacts are drought, flooding caused by extreme precipitation and heat stress caused by the urban heat island effect. Besides threatening the building stock, climate change also threatens the quality of life of people in urban environments. In the Netherlands, housing associations are responsible for managing the social housing stock and maintaining their quality of life. Research has shown that they are not yet aware of the challenge that lies ahead to adapt their dwellings to a changing climate. Given the focus on physical adaptations of the building stock, this paper discusses the effectiveness of three types of governance strategies that housing associations can directly apply in their maintenance processes. The governance strategies are hypothesised based on the results of earlier studies on the implementation of climate change adaptations in social housing. The strategies are: S1. Taking up climate adaptation in the policy developments that guide the overall management of the stock; S2. Involving actors that traditionally stand aside from the construction process, such as insurance companies and water boards; S3. Emphasising performance-based procurement, stimulating the execution of projects in a partnering approach. 
The effectiveness of the strategies was tested by means of a SWOT analysis per strategy with practitioners. The results are five scenarios, based on combinations of strategies, that are potentially feasible for the implementation of climate change adaptation measures in the Dutch social housing stock. A crucial factor in the scenarios is collaboration, because nowadays a housing association is not (financially) capable of assuming the responsibility of climate-proofing its housing stock all by itself.","adaptation; climate change; construction process; policy development; social housing","en","conference paper","CIB/The University of Salford","","","","","","","","Architecture and The Built Environment","OTB","","","",""
"uuid:679b3977-ae01-4a6e-a2d2-5c2fad9707a6","http://resolver.tudelft.nl/uuid:679b3977-ae01-4a6e-a2d2-5c2fad9707a6","A method for Precedent Analysis of Spatial Artefacts","Guney, A.","","2014","This paper will treat two issues regarding innovative/creative morphological analysis of spatial artifacts in relation to their Form, Operation and Performance. The first concerns precedents and their analogical use in the design process; the second is an example of a comparative (architectural) precedent analysis of two buildings by the same architect (office). Learning by analogy is, in general, a powerful method. Analogy basically has two domains: a source and a target; thus, the design domain is the target domain and that of analysis the source. I will try to show how we can use structured analogical source knowledge in the spatial design process, the target domain. This paper will go in depth into creative analogy in terms of the constraints of similarity, structure and purpose, as Holyoak and Thagard (1996) put it. A schematic paradigm will also be presented about creativity through analogical and other creative mental behaviors such as defamiliarization, circumscribing, mental leaps, metaphor, simile, mimesis and aesthetical judgment. Each spatial artifact has a form, an operation (the working of the function; thus, not function alone) and a performance, most of which is normative. Form will be analyzed and represented in terms of its spatial relationships and organizations; its physical properties (its structure, daylight quality, geometry, mass) and the abstraction of these properties as parti (the dominant underlying characteristics of the artifact at hand, in terms of form); and its topological (non-metric) properties: the accessibility of its constituent building blocks and spaces. 
Operation will basically represent how spatial divisions and blocks could best be used, and whether their working of the function matches the actual ends of the artifact at hand. Performance will represent performative properties in relation to operation and form: how well or badly the artifact operates, and an evaluation of how the form emerged in relation to its context, spatial quality and aesthetics. A schematic diagram of form, operation and performance can be written as: F(m) – O – P. In the process of analysis we can observe whether the form will or will not afford operation, and operation performance; in the design process, performance will ask for affordances from operation, and operation from form (morph). This mutual working of design and analysis will be explained at several design phases: concept, pre-parametric sketch, parti (pre-parametric design), parametric alternatives and, eventually, the definitive design. Finally, the analyses of the two buildings will be compared with each other and conclusions will be drawn.","knowledge; precedent analysis; creative design process; cognitive structure; analogy; source and target domain","en","conference paper","IOS Press","","","","","","","2016-04-30","Architecture and The Built Environment","Urbanism","","","",""
"uuid:7ed2c606-2b15-49b8-9871-abc9c2aa3de2","http://resolver.tudelft.nl/uuid:7ed2c606-2b15-49b8-9871-abc9c2aa3de2","Data-driven architectural production and operation","Bier, H.H.; Mostafavi, S.","","2014","Data-driven architectural production and operation as explored within Hyperbody rely heavily on system thinking implying that all parts of a system are to be understood in relation to each other. These relations are increasingly established bi-directionally so that data-driven architecture is not only produced (created or designed and fabricated) by digital means but also is incorporating digital, sensing-actuating mechanisms that enable real-time interaction with (natural and artificial) environments and users. Data-driven architectural production and operation exploit, in this context, the generative potential of process-oriented approaches wherein interactions between (human and non-human) agents and their (virtual and physical) environments have emergent properties that enable proliferation of hybrid architectural ecologies.","Data-driven Design; Generative Systems; Design Information Modeling; Emergent Design Processes","en","conference paper","Bertalanffy Center for the Study of Systems Science (BCSSS)","","","","","","","","Architecture and The Built Environment","Architectural Engineering +Technology","","","",""
"uuid:ff561d27-4f2e-4d7f-8245-6b86dbb50e10","http://resolver.tudelft.nl/uuid:ff561d27-4f2e-4d7f-8245-6b86dbb50e10","Common Platform Dilemmas: Collective Action and the Internet of Things","Nikayin, F.A.","Tan, Y.H. (promotor); De Reuver, G.A. (promotor)","2014","","Platforms; Internet of Things; Home Energy Management; E-health; Collective Action; Ecosystem; Collaboration; Case Study; Analytic Hierarchy Process","en","doctoral thesis","Next Generation Infrastructure Foundation","","","","","","","","Technology, Policy and Management","Engineering Systems and Services","","","",""
"uuid:79b51e32-7071-478b-ad0b-12ccac8f01bc","http://resolver.tudelft.nl/uuid:79b51e32-7071-478b-ad0b-12ccac8f01bc","Annealing of SnO2 thin films by ultra-short laser pulses","Scorticati, D.; Illiberi, A.; Bor, T.; Eijt, S.W.H.; Schut, H.; Römer, G.R.B.E.; De Lange, D.F.; Huis in 't Veld, A.J.","","2014","Post-deposition annealing by ultra-short laser pulses can modify the optical properties of SnO2 thin films by means of thermal processing. Industrial grade SnO2 films exhibited improved optical properties after picosecond laser irradiation, at the expense of a slightly increased sheet resistance [Proc. SPIE 8826, 88260I (2013)]. The figure of merit Φ = T^10 / Rsh was increased by up to 59% after laser processing. In this paper we study and discuss the causes of this improvement at the atomic scale, which explain the observed decrease of conductivity as well as the observed changes in the refractive index n and extinction coefficient k. It was concluded that the absorbed laser energy affected the optoelectronic properties preferentially in the top 100-200 nm region of the films by several mechanisms, including the modification of the stoichiometry, a slight desorption of dopant atoms (F), adsorption of hydrogen atoms from the atmosphere and the introduction of laser-induced defects, which affect the strain of the film.","laser materials processing; ultrafast lasers; subwavelength structures; nanostructures; transparent conductive coatings; solar energy","en","journal article","Optical Society of America","","","","","","","","Applied Sciences","RST/Radiation, Science and Technology","","","",""
"uuid:d6b8a31e-71f5-4509-adca-9cd672432c1e","http://resolver.tudelft.nl/uuid:d6b8a31e-71f5-4509-adca-9cd672432c1e","Multimodal Surveillance: Behavior analysis for recognizing stress and aggression","Lefter, I.","Jonker, C.M. (promotor)","2014","Nowadays, camera systems are installed in military areas as well as in public spaces like schools, shopping malls, airports, and football stadiums. Human operators monitor the screens, looking for any signs of unwanted behavior and negative incidents. The task requires personnel working 24/7. With the ever increasing number of cameras, surveillance operators become overloaded. The nature of the task, constantly watching screens, and the sparsity of notable events are bound to decrease the operators' focus. Furthermore, some events are hard to distinguish by video only: severe events such as gunshots and screams are much easier to hear than to see. For these reasons, negative events may go unnoticed, and typically the recorded footage is inspected after the fact. A solution to these problems is the development of automatic multimodal (audio-visual) surveillance systems, which was the aim of this research. Such systems should not take over the decisions of the operators, but should assist them in identifying unwanted behaviour. Operators would be notified when and where to focus. This is likely to reduce the number of missed events caused by screen prioritising or external and internal distractions. It is important to note that such a system should not be limited to recognizing violence. It has been shown that negative emotions and stress might precede aggression. Recognizing them at an early stage is very relevant, since adopting proper measures early can prevent the situation from escalating. Therefore, in this thesis, besides a variety of manifestations of aggression, we have focused on automatically recognizing stress. 
Our aim was to design and implement a surveillance system that is able to emulate human perception. For that reason, we asked people to annotate stress and aggression on audio-visual recordings. We investigated several approaches to compute their annotations automatically. Recordings from real surveillance cameras are in general not available due to privacy reasons; we therefore had to construct our own datasets. In order to ensure a high degree of realism as well as sufficient samples of stress and aggression, we designed scenarios and hired semi-professional actors to play them. The actors were free to improvise after they received roles and short scenario descriptions. We recorded stressful scenes at a service desk, and aggression-related scenarios in a train and at a train station. To automatically recognize the stress and aggression levels, we extracted acoustic, linguistic and visual features, referred to as low-level features. Using classifiers, we trained models which can be used to make predictions of stress or aggression levels on new data samples. One shortcoming of this approach is that there is a semantic gap between the low-level features and the high-level stress and aggression assessment. We have contributed by bridging the semantic gap with semantically meaningful intermediate representations of the stress concept. The intermediate representation of stress consists of the degrees to which stress is conveyed by speech and gestures with respect to the semantic message and the way in which the semantic message is expressed (e.g. intonation for speech; speed, rhythm and tension for gestures). Adding such a representation as an intermediate level in the stress recognition architecture improves the stress assessment, especially when the level of stress is high. Having both audio and video offers the possibility to construct a more complete representation of the scene. 
The multimodal fusion approach is expected to be a solution to deal with the shortcomings of each modality (e.g. noise for audio, occlusion for video). Despite the expected benefits, fusing information coming from different modalities is challenging. Typical problems are that some pieces of information are only apparent in one modality (e.g. verbal fight), and that multiple people in the scene can have different behaviors which might lead to different assessments based on where the focus is. These problems can lead to incongruent, or even contradicting information from the different modalities, which makes coming to the correct interpretation hard. To deal with the problem of fusing incongruent information we have proposed and validated five meta-features: audio-focus, video-focus, context, semantics and history. The meta-features and the audio-only and video-only aggression assessments form the intermediate level of the aggression recognition model. This novel approach significantly improved automatic aggression recognition by multimodal fusion.","automatic surveillance; multimodal information fusion; audio processing; video processing; emotion; stress; aggression recognition","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","","",""
"uuid:590a0672-6778-46b3-8623-1d1e73c7235d","http://resolver.tudelft.nl/uuid:590a0672-6778-46b3-8623-1d1e73c7235d","Full-scale partial nitritation/anammox experiences: An application survey","Lackner, S.; Gilbert, E.M.; Vlaeminck, S.E.; Joss, A.; Horn, H.; Van Loosdrecht, M.C.M.","","2014","Partial nitritation/anammox (PN/A) has been one of the most innovative developments in biological wastewater treatment in recent years. With its discovery in the 1990s a completely new way of ammonium removal from wastewater became available. Over the past decade many technologies have been developed and studied for their applicability to the PN/A concept and several have made it into full-scale. With the perspective of reaching 100 full-scale installations in operation worldwide by 2014, this work presents a summary of PN/A technologies that have been successfully developed, implemented and optimized for high-strength ammonium wastewaters with low C:N ratios and elevated temperatures. The data revealed that more than 50% of all PN/A installations are sequencing batch reactors, that 88% of all plants are operated as single-stage systems, and that 75% are used for sidestream treatment of municipal wastewater. Additionally an in-depth survey of 14 full-scale installations was conducted to evaluate practical experiences and report on operational control and troubleshooting. Incoming solids, aeration control and nitrate build-up were revealed as the main operational difficulties. The information provided gives a unique, new perspective across all the major technologies and discusses the remaining obstacles.","partial nitritation; anammox; deammonification; process stability; sequencing batch reactor; biofilm; granula","en","journal article","Elsevier","","","","","","","","Applied Sciences","BT/Biotechnology","","","",""
"uuid:b55c74ff-5e6f-4228-823a-cb9cff425a38","http://resolver.tudelft.nl/uuid:b55c74ff-5e6f-4228-823a-cb9cff425a38","The building process as a chain of displacements - Following a construction project from strategic planning through an architectural competition to the building permit","Silberberger, J.; Strebel, I.; Tränkle, P.","","2014","While research on architectural competitions can be considered a well-established field nowadays, research on the transition between the competition procedure and the subsequent project phase remains fragmentary. The paper at hand aims at addressing this gap. Standing in the tradition of actor-network theory (Callon 1986; Law and John 2004; Latour 2005), the paper is attentive to the various displacements that shape a construction project from strategic planning and preliminary studies to the end of the competition procedure and then through the subsequent project phase. In this way, the paper embeds the architectural competition into the building process and elaborates a perspective on the latter as a set of intertwining procedures that constantly assess and re-define the construction project.","architectural competition; project phase; planning process; building process; actor-network theory","en","conference paper","","","","","","","","","","","","","",""
"uuid:e5219828-779c-4d26-988f-8195911de22a","http://resolver.tudelft.nl/uuid:e5219828-779c-4d26-988f-8195911de22a","Architectural competitions as a municipal instrument for innovating space for the ageing society: the dynamics of three competitions","Andersson, J.E.","","2014","Sweden is entering the ageing society. On a national level, and in a cyclic process with a time lapse of 30 to 40 years, three architecture competitions have been realized during the 20th century in order to renew spatial thinking concerning housing for dependent and frail persons in need of daily care and caring, in the following termed Residential Care Homes, RCH. During the first years of the 21st century, the number of available flats in RCHs dropped by 23 per cent. As a result, the matter of appropriate housing for frail older people entered the political agenda. In 2010, the Swedish government launched the governmental program Growing Old, Living well, GOLW, in order to explore residential housing for the emerging ageing society. In the program, architecture competitions were recognized as a method for innovating architecture and the built environment. This study is a parallel case study of three municipal organizers' considerations and preparations for organizing invited architecture competitions with prequalification. The research material consists of written documentation, questionnaires and interviews. All in all, 42 respondents participated, all of them actors in the municipal process of realizing either a pilot study in view of a subsequent architectural competition, or just the latter option. The assembled research material was submitted to a close-reading analysis, which allowed for reconstructing the dynamics of the municipal organizational processes. The study sheds light on how municipal actors work with the matter of organizing a competition, and gives an estimate of the time necessary for planning one. 
The study supports the assumption that the ideal time frame for organizing and realizing municipal competitions is approximately 21 months. A more compressed timeline will generate problems that will be visible in the architects' submitted proposals and the subsequent jury assessment process.","architecture competitions; competition programmes; organizational process; municipal stakeholders; housing for older persons","en","conference paper","","","","","","","","","","","","","",""
"uuid:75130c37-edb5-4a34-ac2f-c156d377ca55","http://resolver.tudelft.nl/uuid:75130c37-edb5-4a34-ac2f-c156d377ca55","Control, measurement and entanglement of remote quantum spin registers in diamond","Bernien, H.","Hanson, R. (promotor)","2014","A quantum network is the essential resource for distributed quantum computation and the enabling technology for secure quantum communication over large distances. Setting up such a network would require establishing quantum connections between local nodes which are capable of generating, processing and storing quantum information. Intensive research is carried out in laboratories around the world investigating suitable systems for implementing a quantum network. The experiments presented in this thesis explore the possible realisation with nitrogen vacancy (NV) centres in diamond. We study how these centres could serve as nodes in such a network and develop a toolbox that consists of quantum control and measurement acting on these nodes. Furthermore, we demonstrate how to connect nodes and create entanglement between two distant NV centres.","quantum information processing; NV-centres; entanglement","en","doctoral thesis","","","","","","","","2014-02-10","Applied Sciences","QN/Quantum Nanoscience","","","",""
"uuid:149e8df2-108e-40a8-b57b-f6f7889cc5d3","http://resolver.tudelft.nl/uuid:149e8df2-108e-40a8-b57b-f6f7889cc5d3","Bidirectional infrasonic ducts associated with sudden stratospheric warming events","Assink, J.D.; Waxler, R.; Smets, P.S.M.; Evers, L.G.","","2014","In January 2011, the state of the polar vortex in the midlatitudes changed significantly due to a minor sudden stratospheric warming event. As a result, a bidirectional duct for infrasound propagation developed in the middle atmosphere that persisted for 2 weeks. The ducts were due to two zonal wind jets, one between 30 and 50 km and the other around 70 km altitude. In this paper, using microbarom source modeling, a previously unidentified source region in the eastern Mediterranean is identified, besides the more well known microbarom source regions in the Atlantic Ocean. Infrasound data are then presented in which the above mentioned bidirectional duct is observed in microbarom signals recorded at the International Monitoring System station I48TN in Tunisia, from the Mediterranean region to the east and from the Atlantic Ocean to the west. While the frequency bands of the two sources overlap, the Mediterranean signal is coherent up to about 0.6 Hz. This observation is consistent with the microbarom source modeling; the discrepancy in the frequency band is related to differences in the ocean wave spectra for the two basins considered. This work demonstrates the sensitivity of infrasound to stratospheric dynamics and illustrates that the classic paradigm of a unidirectional stratospheric duct for infrasound propagation can be broken during a sudden stratospheric warming event.","infrasound; stratospheric warming; microbarom source modeling; propagation modeling; array processing; CTBT","en","journal article","American Geophysical Union","","","","","","","2014-08-05","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:14259a8a-8e72-462b-b284-c28e23b3a095","http://resolver.tudelft.nl/uuid:14259a8a-8e72-462b-b284-c28e23b3a095","The double focal transformation and its application to data reconstruction","Kutscha, H.","Gisolf, A. (promotor); Verschuur, D.J. (promotor)","2014","Many seismic data processing and imaging processes require densely and regularly sampled data, whereas the actual measurements are mostly irregularly and sparsely sampled. Therefore, seismic data reconstruction methods are utilised as a pre-processing step. Within the class of transformation-based reconstruction techniques, observed seismic data is decomposed into certain basis functions, such as plane waves, parabolas or curvelets. In the corresponding model space the aliasing noise is assumed to have different properties than the seismic signal and can be suppressed. However, in many cases subsurface information is available that cannot be used in these traditional reconstruction methods. Therefore, the double focal transformation was derived as a way to incorporate knowledge about the subsurface in the reconstruction algorithm. The basic principle of the double focal transformation is to focus seismic energy by a back-propagation of the seismic data at the source and receiver side to certain depth levels. As a result, the seismic data are represented by a limited number of samples in the focal domain in a localised area, whereas aliasing noise spreads out. By imposing a sparse solution in the focal domain, aliasing noise is suppressed and data reconstruction beyond aliasing is achieved. To facilitate the process, only a few effective depth levels need to be included, preferably along the major boundaries in the subsurface. Propagation operators from these boundaries to the surface (focal operators) serve as the basis functions of this data decomposition method. Including more depth levels allows a sparser data representation, and hence, increases the reconstruction capability. 
The more precisely the subsurface information is known, the more accurately these propagators can be computed. However, very precise operators are not necessary for a good reconstruction result, because in the reconstruction step (the inverse focal transformation) the effect of these operators is again removed. The calculation of the double focal transformation requires a non-linear inversion process, where the samples in the focal domain are estimated such that they - after inverse transformation - match the input data at the measurement locations. Because the inversion process is under-determined, an extra constraint on the focal domain is applied, for which the minimum L1 norm is chosen. This forces the distribution in the focal domain to be sparse and - thereby - suppresses the aliasing noise. For the inversion a so-called spgl1 solver has been used that is guaranteed to converge to the desired minimum of the defined objective function. It utilises a steepest-descent-type iterative process, called Spectral Projected Gradient. Seismic data reconstruction via the double focal transform method appears to be robust against inaccuracies in the focal operators up to roughly ten percent velocity error. Furthermore, the method was extended to the full 3D case, where each focal transform sub-domain in principle contains a 5D data space. In addition to the basic focal transformation, the method can be combined with other transforms in order to increase data compression. As an example, the double focal transformation can be combined with the linear Radon transformation, such that the seismic data can be represented more sparsely and fewer focal operators are necessary. 
Satisfactory results of focal domain data reconstruction beyond aliasing on 2D and 3D synthetic and 2D field data illustrate the method’s virtues.","reconstruction; interpolation; aliasing; propagation; back-propagation; seismic; acoustic; transformation; focusing; inversion; sparseness; sampling; pre-processing; decomposition; prior knowledge; noise; effective or compressive data representation; hybrid transformation","en","doctoral thesis","","","","","","","","","Applied Sciences","Department of Imaging Physics","","","",""
"uuid:b574eb40-6bff-49d4-9ba2-bed4a588d850","http://resolver.tudelft.nl/uuid:b574eb40-6bff-49d4-9ba2-bed4a588d850","The Alignment of Business Model and Business Operations within Networked-Enterprise Environments","Solaimani Kartalaei, H.","Tan, Y.H. (promotor); Bouwman, W.A.G.A. (promotor)","2014","For a long time, technology has served as a silver bullet to gain a sustainable competitive advantage and to outperform the competition. However, access to and exploitation of technologies is gradually becoming a commodity, and hence a less powerful resource to be leveraged into a competitive edge. Instead, companies are increasingly captivated by the charm of the Business Model concept as a way to create superior value for themselves, their customers and partners. Despite increasing attention, the literature on Business Models has remained in a high-level conceptual realm, providing rare insight into the actual implementation of Business Models and the factors that affect their feasibility. Even less is known about the implementation of Business Models within networks of collaborating organizations. In response to this conceptual gap, this research studies how the design and implementation of networked Business Models can be aligned and what factors affect the alignment.","Business Model; Business Operations; Business Processes; Business Model Implementation; Smart Living; Smart Home; Multi Case Study; Networked-Enterprise","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","Infrastructure Systems & Services","","","",""
"uuid:e7a49c06-934e-470a-8945-229dd853c394","http://resolver.tudelft.nl/uuid:e7a49c06-934e-470a-8945-229dd853c394","Timing optimization utilizing order statistics and multichannel digital silicon photomultipliers","Mandai, S.; Venialgo, E.; Charbon, E.","","2014","We present an optimization technique utilizing order statistics with a multichannel digital silicon photomultiplier (MD-SiPM) for timing measurements. Accurate timing measurements are required by 3D rangefinding and time-of-flight positron emission tomography, to name a few applications. We have demonstrated the ability of the MD-SiPM to detect multiple photons, and we verified the advantage of detecting multiple photons assuming incoming photons follow a Gaussian distribution. We have also shown the advantage of utilizing multiple timestamps for estimating time-of-arrivals more accurately. This estimation technique can be widely available in various applications, which have a certain probability density function of incoming photons, such as a scintillator or a laser source.","probability theory; stochastic processes, and statistics; photon statistics; photomultipliers; medical optics instrumentation; time-resolved imaging; laser range finder","en","journal article","Optical Society of America","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","","",""
"uuid:3d972e83-b31b-4755-9a04-cacc588ac53f","http://resolver.tudelft.nl/uuid:3d972e83-b31b-4755-9a04-cacc588ac53f","Dynamic chemical process modelling and validation: Theory and application to industrial and literature case study","Schmal, J.P.","Heijnen, J.J. (promotor); Verheijen, P.J.T. (promotor)","2014","Dynamic chemical process modelling is still largely considered an art. In this thesis the theory of large-scale chemical process modelling and validation is discussed and initial steps to extend the theory are explored. In particular we pay attention to the effect of the level of detail on the model simulation and optimisation performance. We investigate the liquid-filled tubular reactor, HIDiC and optimize the start-up from the cold state of part of a (open literature) plant. Furthermore, an industrial plant was modelled and validated for which model building times are reported.","large-scale dynamic modelling; dynamic validation; chemical process modelling; level of detail; HIDiC; start-up optimisation","en","doctoral thesis","","","","","","","","2014-01-20","Applied Sciences","Chemical Engineering","","","",""
"uuid:8f2d90ea-d476-46ab-be0d-bbd619a76dbe","http://resolver.tudelft.nl/uuid:8f2d90ea-d476-46ab-be0d-bbd619a76dbe","Application of the SES framework for model-based analysis of the dynamics of social-ecological systems","Schlüter, Maja (Stockholm University; Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB)); Hinkel, Jochen (Global Climate Forum); Bots, P.W.G. (TU Delft Policy Analysis); Arlinghaus, Robert (Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB); Humboldt-Universitat zu Berlin)","","2014","Social-ecological systems (SES) are dynamic systems that continuously change in response to internal or external pressures. A better understanding of the interactions of the social and ecological systems that drive those dynamics is crucial for the development of sustainable management strategies. Dynamic models can serve as tools to explore social-ecological interactions; however, the complexity of the studied systems and the need to integrate knowledge, theories, and approaches from different disciplines pose considerable challenges for their development. We assess the potential of Ostrom’s general SES framework (SESF) to guide a systematic and transparent process of model development in light of these difficulties. We develop a stepwise procedure for applying SESF to identify variables and their relationships relevant for an analysis of the SES. In doing so we demonstrate how the hierarchy of concepts in SESF and the identification of social-ecological processes using the newly introduced process relationships can help to unpack the system in a systematic and transparent way. We test the procedure by applying it to develop a dynamic model of decision making in the management of recreational fisheries. 
The added value of the common framework lies in the guidance it provides for (1) a structured approach to identifying major variables and the level of detail needed, and (2) a procedure that enhances model transparency by making explicit underlying assumptions and choices made when selecting variables and their interactions as well as the theories or empirical evidence on which they are based. Both aspects are of great relevance when dealing with the complexity of SES and integrating conceptual backgrounds from different disciplines. We discuss the advantages and difficulties of the application of SESF for model development, and contribute to its further refinement.","dynamic modeling; model development; SES framework; social-ecological modeling; social-ecological processes","en","journal article","","","","","","","","","","","Policy Analysis","","",""
"uuid:b3cb7c31-64b9-48ed-8973-b3db0e813e76","http://resolver.tudelft.nl/uuid:b3cb7c31-64b9-48ed-8973-b3db0e813e76","An Integrated Refurbishment Design Process to Energy Efficiency","Konstantinou, T.; Knaack, U.","","2013","Given the very low renewal rate of the building stock, the efforts to reduce energy demand must focus on the existing residential buildings. Even though awareness has been raised, the effect on energy efficiency is often neglected during the design phase of refurbishment projects. This paper discusses an integrated approach to energy-efficiency upgrades of residential building stock, by assessing the impact of retrofitted components in the early stages of the design. Firstly, the key building components of an integrated refurbishment are identified and various solutions are systematically organised into a “toolbox”. Moreover, a roadmap to refurbishment design was created. The proposed methodology, applied on case study buildings, resulted in improvement of the dwelling energy demand up to 80%. This approach recognises the diversity of each project, as well as the designer’s freedom to his decisions. It assists efficient choices, with respect to the specific requirements of each project. The toolbox information enables designers of refurbishment project to know in the early stages of the design the impact their choices will have.","refurbishment; energy upgrade; design process; building envelope","en","conference paper","Guarant","","","","","","","","Architecture","Architectural Engineering +Technology","","","",""
"uuid:4b123641-4fac-4958-a8db-b509fa79d685","http://resolver.tudelft.nl/uuid:4b123641-4fac-4958-a8db-b509fa79d685","Exploring the processes of generating LOD (0-2) CityGML models in greater municipality of Istanbul","Buyukaslih, I.; Isikdag, U.; Zlatanova, S.","","2013","3D models of cities, visualised and exploded in 3D virtual environments have been available for several years. Currently a large number of impressive realistic 3D models have been regularly presented at scientific, professional and commercial events. One of the most promising developments is OGC standard CityGML. CityGML is object-oriented model that support 3D geometry and thematic semantics, attributes and relationships, and offers advanced options for realistic visualization. One of the very attractive characteristics of the model is the support of 5 levels of detail (LOD), starting from 2.5D less accurate model (LOD0) and ending with very detail indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing the CityGML models, and the process of model generation depends on local and domain specific needs. Although the processes (i.e. the tasks and activities) for generating the models differs depending on its utilization purpose, there are also some common tasks (i.e.common denominator processes) in the model generation of City GML models. This paper focuses on defining the common tasks in generation of LOD (0-2) City GML models and representing them in a formal way with process modeling diagrams.","LOD; CityGML; Istanbul; process","en","conference paper","ISPRS","","","","","","","","OTB Research Institute for the Built Environment","OTB Research","","","",""
"uuid:6af63aa3-452e-4e01-8646-f12fbfee952d","http://resolver.tudelft.nl/uuid:6af63aa3-452e-4e01-8646-f12fbfee952d","Ontology for quality specification in requirements engineering","Heidari, F.; Loucopoulos, P.; Brazier, F.M.","","2013","The field of Requirements Engineering (RE) is arguably one of the most crucial areas in the development of systems in support of organisational structures and processes. Eliciting, negotiating, analysing and validating are RE processes that rely on appropriate abstraction mechanisms. This paper focuses on a specific modelling approach, that of Business Process Modelling (BPM), and the use of a specific ontology for modelling and evaluating quality aspects of business processes. This business process ontology provides an explicit specification of the shared conceptualization and understanding of enterprises between IT and none-IT experts. Specification and measurement of requirements based on an ontology fosters communication between experts. This paper proposes an approach that drives specification and measurement of quality requirements. Application of the proposed approach is illustrated for a simplified version of a business process","ontology; quality requirements; quality specification; quality measrement; business process; businees process modelliing","en","conference paper","IARIA","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:60c3aad6-12d2-4e6c-8ae0-f5d76add32cd","http://resolver.tudelft.nl/uuid:60c3aad6-12d2-4e6c-8ae0-f5d76add32cd","Principles of landscape architecture","Nijhuis, S.","","2013","The Department of Urbanism at the Faculty of Architecture and Built Environment, TU Delft considers urbanism as a planning and design oriented activity towards urban and rural landscapes. It aims to enhance, restore or create landscapes from a perspective of sustainable development, so as to guide, harmonise and shape changes which are brought about by social, economic and environmental processes. In this respect we can consider urbanism as an object or goal-oriented interdisciplinary approach that breaks down complex problems into ‘compartments’ or ‘themes’. Landscape infrastructures is such a theme were transportation-, green-, and water infrastructure are explored as armatures for urban development. The core of urbanism is formed by the disciplines of urban planning, urban design, and landscape architecture. Giving shape to the relationship between man and natural landscape is a core task for this disciplines and involves civil-, agriculture-, nature-, and environmental based techniques as operative instruments. However, in order to work together effectively it is important to identify and develop the qualities of the involved disciplines individually. What is the particular nature of landscape architecture as an independent discipline? The presumption is that the answer can be found in a repertoire of principles of study and practice typical for landscape architecture. But before elaborating on that some backgrounds will be discussed.","landscape architecture; landscape research methods; landscape history; landscape process; scale continuum; three-dimensional landscape","en","book chapter","Mairea Libros Publishers / Delft University of Technology","","","","","","","","Architecture and The Built Environment","Urbanism","","","",""
"uuid:2b755356-0ace-4aee-a059-8cec5b664901","http://resolver.tudelft.nl/uuid:2b755356-0ace-4aee-a059-8cec5b664901","On search games that include ambush","Alpern, S.; Fokkink, R.; Gal, S.; Timmer, M.","","2013","We present a stochastic game that models ambush/search in a finite region Q which has area but no other structure. The searcher can search a unit area of Q in unit time or adopt an ""ambush"" mode for a certain period. The searcher ""captures"" the hider when the searched region contains the hider's location or if the hider moves while the searcher is in ambush mode. The payoff in this zero sum game is the capture time. Our game is motivated by the (still unsolved) princess and monster game on a star graph with a large number of leaves.","ambush strategy; noisy search game; poisson process","en","journal article","Society for Industrial and Applied Mathematics (SIAM)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:71a090f4-5c57-4a1a-a90e-aeed525f9fa2","http://resolver.tudelft.nl/uuid:71a090f4-5c57-4a1a-a90e-aeed525f9fa2","Better than Worst-Case Design for Streaming Applications under Process Variation","Mirzoyan, D.","Goossens, K.G.W. (promotor)","2013","","Embedded Systems; Process Variation; Streaming Applications; Reduced Design Margins (guard-bands)","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Engineering","","","",""
"uuid:a12fbf84-318f-4e8d-be6f-07f54f1a0d44","http://resolver.tudelft.nl/uuid:a12fbf84-318f-4e8d-be6f-07f54f1a0d44","Overcoming Methodological Obstacles in Business Process Simulation under Deep Uncertainty","Markensteijn, T.L.","","2013","Organizations are in ever changing environments which results in the need for constant adaptation of business processes and structures. Discrete Event Simulation (DES) is a commonly used application of Business Process Simulation to support decision makers in complex processes. However, in case deep uncertainty is present in a target business process environment, DES is unsuitable. This article will identify methodological obstacles in regard to applying Exploratory Modeling and Analysis (EMA) on DES based on literature analysis. The result is an overview of methodological obstacles and including suggestions on how to overcome them. This overview can function as a starting point for any practical application of EMA on DES in deeply uncertain business process environments. Future research should focus on more applications of the approach in different practical business process cases where deep uncertainty can be identified.","Discrete Event Simulation; Exploratory Modeling and Analysis; Business Process Simulation; scenario development; deep uncertainty","en","journal article","","","","","","","","","Technology, Policy and Management","Systems Engineering","","","",""
"uuid:1303993f-e41e-4239-bc28-968c049a2d06","http://resolver.tudelft.nl/uuid:1303993f-e41e-4239-bc28-968c049a2d06","Business Process Modelling for Measuring Quality","Heidari, F.; Loucopoulos, P.; Brazier, F.M.","","2013","Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as representations of business processes. The approach is presented in the form of a conceptual model and its application is demonstrated for a simplified version of a business process. However, communication becomes a challenge in crossorganizational business processes where multiple business process modelling languages are being practiced which calls for an abstraction as an integration of concepts of these business process modelling languages. In this paper, a business process integrating meta-model is presented as an abstraction of concepts of seven mainstream business process modelling languages. Attaining such level of understanding and specifying business processes fosters specification and measurement of quality requirements.","quality requirements; quality specification; quality measurement; business process; business process modeling; business process integrating meta-model","en","journal article","IARIA","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","http://resolver.tudelft.nl/uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","Averaging Level Control to Reduce Off-Spec Material in a Continuous Pharmaceutical Pilot Plant","Lakerveld, R.; Benyahia, B.; Heider, P.L.; Zhang, H.; Braatz, R.D.; Barton, P.I.","","2013","The judicious use of buffering capacity is important in the development of future continuous pharmaceutical manufacturing processes. The potential benefits are investigated of using optimal-averaging level control for tanks that have buffering capacity for a section of a continuous pharmaceutical pilot plant involving two crystallizers, a combined filtration and washing stage and a buffer tank. A closed-loop dynamic model is utilized to represent the experimental operation, with the relevant model parameters and initial conditions estimated from experimental data that contained a significant disturbance and a change in setpoint of a concentration control loop. The performance of conventional proportional-integral (PI) level controllers is compared with optimal-averaging level controllers. The aim is to reduce the production of off-spec material in a tubular reactor by minimizing the variations in the outlet flow rate of its upstream buffer tank. The results show a distinct difference in behavior, with the optimal-averaging level controllers strongly outperforming the PI controllers. In general, the results stress the importance of dynamic process modeling for the design of future continuous pharmaceutical processes.","control; process modeling; process simulation; parameter estimation; dynamic modeling; optimization; crystallization; continuous pharmaceutical manufacturing","en","journal article","MDPI","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:0f1c8773-f7e9-412e-b1e4-283e0cf33ee6","http://resolver.tudelft.nl/uuid:0f1c8773-f7e9-412e-b1e4-283e0cf33ee6","Stochastic Evolution Equations with Adapted Drift","Pronk, M.","Van Neerven, J.M.A.M. (promotor); Veraar, M.C. (promotor)","2013","In this thesis we study stochastic evolution equations in Banach spaces. We restrict ourselves to the two following cases. First, we consider equations in which the drift is a closed linear operator that depends on time and is random. Such equations occur as mathematical models in for instance mathematical finance and filtration theory. Second, we restrict ourselves to UMD Banach spaces with type 2. As the theory of Ito stochastic integration is insufficient for studying equations of this general type, we need to have a proper understanding of several extensions to the Ito integral. Two of such extensions that are considered rigorously in this thesis are the Skorohod integral and the forward integral. Moreover, in Chapter 5, a new solution concept is introduced. The relationship between other solution concepts is discussed. Finally, we prove existence, uniqueness and regularity of solutions to stochastic evolution equations with adapted drift.","Malliavin Calculus; Stochastic Partial Differential Equations; Stochastic Evolution Equations; Forward Integration; Truncated Skorohod Integral; Space-Time Regularity; UMD Banach space; path-wise mild solution; stochastic convolution; adapted drift; non-adapted processes","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:0407019d-248c-4061-b349-f9ea81085da1","http://resolver.tudelft.nl/uuid:0407019d-248c-4061-b349-f9ea81085da1","Microwave Field Applicator Design in Small-Scale Chemical Processing","Sturm, G.S.J.","Stankiewicz, A.I. (promotor); Verweij, M.D. (promotor); Stefanidis, G.D. (promotor)","2013","Ever since the first experiments nearly three decades ago, microwave enhanced chemistry has received incessant scientific attention. Many studies report improved process performance in terms of speed and conversion under microwave exposure and therefore it is recognized as a promising alternative method of process activation. It has also raised skepticism though, since the mechanisms behind the process enhancement remain unclear. Nevertheless, in the context of process intensification, the combination of microwave fields and microreactor systems has a promising quality; the enhanced reaction rates of the former and the superior heat and mass transfer rates and tightly controlled processing conditions of the latter together may provide a well-controlled and highly intensified processing environment.The objective of this thesis is to explore the possibilities to apply a microwave field in microstructured reactor systems. The familiar (domestic) multimode cavity systems are discounted as a viable means to apply a microwave field to a microreactor; the electromagnetic conditions in such systems simply are too poorly defined and controlled. In order to give each molecule the same processing experience, the field applicator needs to apply a spatially uniform microwave field. Therefore it is investigated what the theoretical limitations are on the uniformity of the electromagnetic field and heating rate distribution under parametric variation in a hypothetical resonant system. Design charts are presented that illustrate how important operating, geometric and medium parameters relate with each other. 
It is demonstrated how these simple configurations can provide design guidelines and first approximations for more realistic process equipment geometries. In a next step, the practical limitations encountered in commonly applied cavity systems are investigated. To this end, a simple exemplary process was analyzed both by experiment and simulation. The process under consideration is heating of water contained in a vial inside a popular, off-the-shelf, single-mode microwave cavity device. Both the heating rate distribution and the overall heating rate are investigated, as well as the sensitivity of these measures to parametric variation. It is found that the resonant microwave field in generic, non-tailored systems is highly sensitive to parametric variation, that the heating process is hard to predict, and that such systems do not lend themselves to control or optimization. Currently, the types of microwave equipment that are used in microwave chemistry research are principally limited to the aforementioned generic microwave systems. To widen this scope, the potential of standard-sized, rectangular waveguides to form a basis for microwave applicator systems is explored. It is demonstrated that such systems support microwave fields that are relatively simple and predictable, which enables a higher degree of adaptation and optimization to fit specific process requirements. The feasibility of long residence time continuous flow processing under microwave activation is experimentally demonstrated in a novel reactor type that the rectangular waveguide uniquely supports. Up to this point only cavity systems that support resonant fields have been considered. Resonant conditions are associated with hard-to-predict electromagnetic field patterns, difficulty in controlling and optimizing heat generation, and intrinsic spatial non-uniformity.
The novel Coaxial Traveling Microwave Reactor concept is proposed as a means to address these issues by avoiding resonance altogether. Thus the highly optimized processing conditions characteristic of microreactors may be retained. Two concept variants are presented, one for liquid phase processing and one for heterogeneous gas phase catalytic reactions, respectively. A method to optimize the applicator geometry is demonstrated. The thesis is concluded by a discussion on the design principles that were identified in the course of the research and on a framework for further development of equipment for electromagnetically enhanced chemical processing systems.","microwave fields; microstructured chemical processing; resonance; reactor design","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:b3dd0e4a-6cda-4092-9a96-676d1e1d24e8","http://resolver.tudelft.nl/uuid:b3dd0e4a-6cda-4092-9a96-676d1e1d24e8","Ultrasound imaging for quantitative evaluation of magnetic density separation","Sanaee, S.A.","Rem, P.C. (promotor); Wapenaar, C.P.A. (promotor)","2013","This thesis is dedicated to an investigation of the potential and technological possibilities of an inline ultrasound system as a quality control system for wet recycling of solid waste. The main targeted recycling technology is magnetic density separation (MDS), a novel technique that was investigated and technologically matured in a project running in parallel to this work. In MDS, the easily magnetisable ferrofluid is used as the separation medium to sort different materials based on their mass densities. The MDS is very accurate compared to conventional recycling separation techniques as it is effective even when the densities are very close (< 1 weight percent), such as for different polyolefin plastics. The special attention for plastics in this work is motivated by the economical and environmental gains that are obtained from the separation of plastics from large waste flows such as automobile scrap and household waste. Due to the inherent optical opaqueness of ferrofluid, the ultrasound imaging system is the only effective method that allows accurate observation and study of the waste particles as they separate in the channel. Moreover, the intrinsic properties of ultrasound make it suitable for quantitative analyses, such as particle tracking and measurement of particle sizes, volume and density distributions as the particles flow in large quantities towards the extraction units. The main objectives for this work have been achieved. It was shown that commercial medical 2D ultrasonic imaging systems provide a good technological point of departure for the desired inline system. 
They are capable of generating good quality images of moving particles, provided the view of the probe onto the particles is well controlled. Moreover, it has been shown that real-time ultrasound is capable of delivering online quantitative information that is crucial to the performance of an MDS. In particular, image processing techniques have been applied on the real-time ultrasound video-streams to evaluate the particles' density distribution in the channel, to measure the particle velocity and to analyze their motion behaviour as they float in the ferrofluid. The limitations of the medical commercial technology are that it cannot serve as a reliable stand-alone machine and also cannot provide all types of quantitative analyses that are desirable in an industrial recycling operation. As a next step, the investigation turned towards the imaging methods themselves. For that purpose the general linear acoustic theory for waves in ferrofluids and acoustic imaging was reviewed, first to establish the basic physics and main principles of imaging. The potential of quantitative ultrasound analysis was determined by focusing on cross-section imaging, which is the biggest challenge for accurate 2D imaging. It has been shown that probe positioning and the overall data acquisition strategy deserve due consideration, since data quality proved paramount for good quality images. For the imaging research, the most technologically promising ultrasound methods were adapted from the fields of seismology, medical ultrasound and non-destructive testing. These imaging methods were developed in either the space-time domain or the Fourier domain, as each approach proved to have its own advantages and limitations for data requirements and computational costs. The methods were implemented in Matlab and supplied with raw ultrasound data, scanned from static scenes with just a few generic test objects.
These objects were generic in the sense that all possible shapes and size-dependent acoustic wave effects were captured that could be expected with ‘real’ waste particles. Two complementary data sets were used to investigate the possible benefits of having either a wider sensor array aperture during transmission (pulse-echo data) or during data reception (plane wave data). The resulting images were evaluated in terms of performance indicators, which were introduced to obtain a more objective judgement of image quality. This research showed that accurate ultrasound cross-section imaging is quite feasible if good quality data can be scanned, i.e. if the data contain the necessary acoustic information. In particular, the availability of acoustic information from both front and back surfaces was found to be the key factor for good quality data. It is also concluded that all the imaging methods tested in this work are in principle capable of delivering good image quality, provided the data are of sufficient quality. What sets them apart are the substantial differences in computational costs and the ability to process different types of data. Finally, the research conducted in this thesis has also led to the compilation of a set of recommendations for future realization of an ultrasound system, dedicated to inline quality control in recycling.","ultrasound imaging; plastics recycling; wet processing of solid waste; quantitative acoustical measurement techniques; process control and optimization","en","doctoral thesis","","","","","","","","2013-11-04","Civil Engineering and Geosciences","Structural Engineering","","","",""
"uuid:e37a8243-fcc4-4384-b88b-54140f35313c","http://resolver.tudelft.nl/uuid:e37a8243-fcc4-4384-b88b-54140f35313c","Quantification of Imaging Biomarkers For Cardiovascular Disease in CT(A)","Shahzad, R.","Van Vliet, L.J. (promotor); Niessen, W.J. (promotor); Van Walsum, T. (promotor)","2013","For better management of cardiovascular disease, it is of utmost importance to categorize the subjects into different risk groups. This categorization can be made based on cardiovascular risk factors including the family history of the subject. Imaging techniques play an increasing role in order to assess ardiovascular risk factors. In this thesis we set out to develop and evaluate automatic techniques for the extraction of quantitative imaging biomarkers for coronary artery disease (CAD). One of the important cardiovascular risk factor is the presence of calcium in the arteries. We presented an automatic method that can compute the amount of calcium scores for the whole heart as well as for each of the coronary arteries from CT data. The system also categorizes patients into different risk groups. This vessel specific calcium lesion information can be used for treatment planning and assessing progression of CAD in follow up studies. The possibility to assign calcium to individual coronary arteries was possible owing to the ’Coronary Density Estimate’. The second imaging biomarker is epicardial fat volume. We resent a method that can accurately quantify the amount of epicardial fat volume. It was demonstrated that the method performs as good as the manual observers, hence has great potential to be used in daily clinical practice. In a clinical study on 2298 subjects it was demonstrated that indeed larger volumes of epicardial fat volumes were related to larger volumes of calcified lesions in the various vessel beds. The potential of this biomarker will need to be established in multiple larger studies. 
The third imaging biomarker in CAD considered in this thesis is coronary artery stenosis grade. Accurate detection and quantification of coronary stenoses is of great importance, as this information is essential for the clinician to make an accurate treatment selection and plan. We investigated the ability to detect and quantify coronary stenoses from CTA data. We demonstrated that the vessel lumen can be segmented with a precision similar to that of human observers, but that it remains a challenge to distinguish between significant and non-significant lesions. Quantitative imaging biomarkers in CAD may provide both anatomical and functional information, and are often obtained from different imaging modalities. An important subject with respect to treatment planning is therefore the ability to combine information from different modalities in an integrated display. The SMARTVis system was introduced to fuse anatomical information from CTA scans and functional information from SPECT-MPI into one display. The integrated visualization proposed in the SMARTVis system enables a one-stop-shop visual exploration of cardiac anatomical and functional data, to maximally exploit the complementary information of multiple imaging modalities. It has been confirmed that such comprehensive visualizations allow perfusion defects and coronary lesions to be related effectively, and that fused integrated analysis leads to a more accurate diagnosis. Automatic image processing plays an increasingly important role, not only in extracting relevant quantitative imaging biomarkers from CT imaging data, but also in establishing the accuracy with which they can be assessed.
For a number of relevant cardiovascular quantitative imaging biomarkers, this thesis has provided the required methodology.","Cardiovascular; Image Processing; Image Segmentation; Image Registration; Risk Stratification","en","doctoral thesis","","","","","","","","","Applied Sciences","Imaging Science & Technology","","","",""
"uuid:3db45913-1662-429f-a385-ed53f5ac41fd","http://resolver.tudelft.nl/uuid:3db45913-1662-429f-a385-ed53f5ac41fd","Model Driven Development of Simulation Models: Defining and Transforming Conceptual Models into Simulation Models by Using Metamodels and Model Transformations","Küçükkeçeci Çetinkaya, D.","Verbraeck, A. (promotor)","2013","Modeling and simulation (M&S) is an effective method for analyzing and designing systems and it is of interest to scientists and engineers from all disciplines. This thesis proposes the application of a model driven software development approach throughout the whole set of M&S activities and it proposes a formal model driven development framework for modeling and simulation, which is called MDD4MS. The MDD4MS framework presents an integrated approach to bridge the gaps between different steps of a simulation study by using metamodeling and model transformations. The practical examples with the MDD4MS framework showed that the framework is applicable and useful in the business process modeling and simulation domain. This thesis mainly addresses the conceptual modeling and the simulation model development stages in the M&S lifecycle and the proposed framework can be incorporated into existing simulation methodologies for increasing the productivity, maintainability and quality of M&S projects.","model driven development; modeling and simulation; metamodeling; business process modeling; component based simulation","en","doctoral thesis","","","","","","","","2013-11-15","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:1da69808-9ed0-4a1f-8460-cf2cf431a2cd","http://resolver.tudelft.nl/uuid:1da69808-9ed0-4a1f-8460-cf2cf431a2cd","The road ahead in seismic processing","Berkhout, A.J.","","2013","The next generation seismic processing system will comprise a chain of unified algorithms, from preprocessing to reservoir characterization. All these algorithms are formulated in terms of a closed-loop estimation problem, showing a great similarity with each other. A critical module in each algorithm is forward modeling, allowing feedback between output and input ('closing the loop'). For this purpose a new wavefield modeling concept is proposed that uses for each algorithm a different choice of parameterization. Characteristic properties of the proposed closed-loop processing system are full wavefield, low complexity, high degree of automation and relatively little maintenance.","estimation; multiparameter; processing; reservoir characterization; full-waveform inversion","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:e08d77a0-977c-43ac-8c7a-a679ae8cc739","http://resolver.tudelft.nl/uuid:e08d77a0-977c-43ac-8c7a-a679ae8cc739","Dispersive multi-modal mud-roll elimination using feedback-loop approach","Ishiyama, T.","","2013","In a shallow water environment, mud-rolls are often dominant and appear as a prevailing coherent noise in OBC seismic data. Their complex properties make the noise elimination notably challenging in seismic processing. To address these challenges, we propose a dispersive multimodal mud-roll elimination method using a feedback-loop approach with a sparse inversion of focal/Radon transformation. In this paper, we illustrate the proposed method, and show some examples on synthetic seismic data to demonstrate its virtues.","OBC; noise; processing; surface wave; adaptive subtraction","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:f75429e7-103e-4193-8434-29e5594ba786","http://resolver.tudelft.nl/uuid:f75429e7-103e-4193-8434-29e5594ba786","Numerical Modeling of Rotary Kiln Productivity Increase","Romero-Valle, M.A.; Pisaroni, M.; Van Puyvelde, D.; Lahaye, D.J.P.; Sadi, R.","","2013","Rotary kilns are used in many industrial processes ranging from cement manufacturing to waste incineration. The operating conditions vary widely depending on the process. While there are many models available within the literature and industry, the wide range of operating conditions justifies further modeling work to improve the understanding of the processes taking place within the kiln. The kiln being studied in this work produces calcium aluminate cements (CAC). In a first stage of the project, a CFD empty kiln model was successfully used to counteract ring formation in the industrial partner’s rotary kiln. However, that work did not take into account the solids being processed in the kiln. The present work describes the phenomena present within the granular bed of the kiln and links it to the observed productivity increase. A validated granular bed model is developed taking into account different approaches found in the literature. Simplified sintering reaction kinetics are proposed by considering experimental X-Ray Diffraction data handed by our Industrial Partner and information reported in the literature. The combined model was use to simulate two sets of operating conditions of the kiln process taking into account the unique chemistry of the calcium aluminates. 
By combining the aspects of the CFD model for the gas phase and a granular bed model for the solid phase, modeling accuracy is improved and by consequence the phenomena occurring in the kiln are better understood.","rotary kilns; computational fluid dynamics; MATLAB; process modeling","en","report","Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:d829cb31-f1a7-4fec-a9b3-26987757dfc9","http://resolver.tudelft.nl/uuid:d829cb31-f1a7-4fec-a9b3-26987757dfc9","Photocatalytic Oxidation in Drinking Water Treatment Using Hypochlorite and Titanium Dioxide","El-Kalliny, A.S.M.","Rietveld, L.C. (promotor)","2013","The main focus of this thesis is to study the advanced oxidation processes (AOPs) of water pollutants via UV/hypochlorite (homogeneous AOPs), and UV solar light/TiO2 (heterogeneous AOPs) in which the highly oxidative hydroxyl radicals (OH) are produced. These radicals are capable of destructing the emerging organic pollutants in water. The combined action of both OH and Cl that are produced during the NaOCl/UV processes increased the chlorination potential of humic acids (HA). In addition, at a high free-radical dose, such as in swimming pool water recirculation systems, the equal levels of adsorbable organic halogens (AOX) and CHCl3 are formed with both low pressure (LP) and medium pressure (MP), respectively. CHCl3, once formed, is not degraded with either LP or MP. Moreover, the photo-degradation of HA in LPUV/NaOCl process is higher than that for the MPUV/NaOCl process, which results in a higher initial rate of AOX and CHCl3 formation. This raised the attention to the risk of using the LPUV/NaOCl process especially at the short reaction times that are relevant for water treatment. Based on the obtained results, a fixed-bed photocatalytic reactor can be applied for a small scale drinking water purification plants. This is mainly due to that TiO2 coated by the electrophoretic deposition technique on stainless steel woven meshes fitted in layers has major advantages over the commonly used flat-plate reactor and the dispersed-phase reactor. This presents a novel reactor in the oxidation of water contaminants such as humic acids and atrazine. Up-scaling of such reactors is feasible. 
It is worth to highlight that the results obtained has led to an improved understanding and applications of AOPs for water treatment. The findings can be used to improve the performance of both small and large scale water purification plants.","advance oxidation process; solar reactor; TiO2 Immobilization; water purification; woven mesh substrates; hypochlorite/UV; chlorination disinfection byproducts","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:189078d4-c81e-44de-bc43-fee6c3173655","http://resolver.tudelft.nl/uuid:189078d4-c81e-44de-bc43-fee6c3173655","Model & scale as conceptual devices in architectural representation","Stellingwerff, M.C.; Koorstra, P.A.","","2013","This year we celebrate the tenth anniversary of our Computer Aided Manufacturing laboratory (CAMlab, http://www.camlab-bk.nl). From the start we provide laser cutting, CNC-milling and 3D-print facilities for the students and the researchers at the Faculty of Architecture in Delft. Over the past ten years we have delivered uncountable amounts of fabricated model parts and we have advised several thousands of students. Also, we have participated in many faculty-, museum- and world traveling exhibitions, and we have conducted many courses about model making and prototyping related to architecture and industrial design. Although we can report and show many successes in scale model making, we also noticed a number of problems, pitfalls and too many examples of rough and unarticulated scale models from students in our own workshop and elsewhere. The downsides of computer directed fabrication techniques were obvious and multiple. First and foremost, we noticed the attitude to see models as an end product. Secondly, as a consequence, there often was the un- articulated outlook, missing the human touch. Thirdly, we noticed the missing sense for scale as a conceptual device. Many models were made as if they were shrunken depictions of reality. This paper describes how we responded to these new problems.","scale model; representation; design process; architecture","en","conference paper","Politecnico di Milano","","","","","","","","Architecture and The Built Environment","Architecture","","","",""
"uuid:8e569ac0-f9f8-4499-b166-2cbc879c840c","http://resolver.tudelft.nl/uuid:8e569ac0-f9f8-4499-b166-2cbc879c840c","Human Activity Modelling Performed by Means of Use Process Ontologies","Trento, A.; Fioravanti, A.","","2013","Quality, according to Pirsigs universal statements, does not belong to the object itself, nor to the subject itself, but to both and to their interactions. In architecture it is terribly true as we have a Building Object and Users that interact with it.The problem we approach here, renouncing at the impossible task of modelling the actors libero arbitrio, focuses on defining a set of occurrences, which dynamically happen in the built environment. If organized in a proper way, use process knowledge allows planners/designers to represent usage scenario, predicting activity inconsistencies and evaluating the building performance in terms of user experience.With the aim of improving both, the quality of buildings and the user experience, this research explores a method for linking process and product ontologies, formalized to support logic synchronization between software for planning functional activities and software for authoring design of infrastructures.","design knowledge modelling; process ontology; knowledge management","en","conference paper","","","","","","","","","","","","","",""
"uuid:ec5be030-01bd-4bb6-abd2-2b904102b51e","http://resolver.tudelft.nl/uuid:ec5be030-01bd-4bb6-abd2-2b904102b51e","Automated Simulation and Study of Spatial-Structural Design Processes","","","2013","A so-called Design Process Investigation toolbox (DPI toolbox), has been developed. It is a set of computational tools that simulate spatial-structural design processes. Its objectives are to study spatial-structural design processes and to support the involved actors. Two case-studies are presented which demonstrate how to: (1) study the influence of transformation methods on design instances and (2) study the influence of transformation methods on the behavior of other transformation methods. It was found that in design instances with the same type of structural elements the influence of a specifically varied transformation method is more explicit; while, when different types are present this influence is more undetermined. It was also found that the use of two specifically different structural modification methods have little influence on the sub-sequential spatial transformation method.","design process research; design process simulation; spatial design; structural design","en","conference paper","","","","","","","","","","","","","",""
"uuid:87520512-4780-4c00-83ff-4a304ab363a8","http://resolver.tudelft.nl/uuid:87520512-4780-4c00-83ff-4a304ab363a8","A Methodology for Computational Architectural Design Based on Biological Principles","El Ahmar, S.; Fioravanti, A.; Hanafi, M.","","2013","Biomimicry, where nature is emulated as a basis for design, is a growing area of research in the fields of architecture and engineering. The widespread and practical application of biomimicry as a design approach remains however largely unrealized. A growing body of international research identifies various obstacles to the employment of biomimicry as an architectural design method. One barrier of particular note is the lack of a clear definition and methodology of the various approaches to biomimicry that designers can initially employ. This paper attempts to link biological principles with computational design in order to present a design methodology that aids interested architects within the preliminary design phase.","biomimicry; architectural design; design process; case study","en","conference paper","","","","","","","","","","","","","",""
"uuid:f0c5dde2-4b3d-47d3-92bb-7f76afa585e6","http://resolver.tudelft.nl/uuid:f0c5dde2-4b3d-47d3-92bb-7f76afa585e6","The Rehabilitation Design Process of the Bourgeois House of Oporto: Shape Grammar Simplification","Coimbra, E.; Romao, L.","","2013","This study was accomplished in the context of a broader research to be developed in an ongoing PhD program in architecture. The purpose of this study is to give a perspective of the research progress and to present a shape grammar simplification that will be improved to assist the rehabilitation design process of the bourgeois house of Oporto.The typology of the bourgeois house of Oporto, built from the late sixteenth century until the early twentieth century, is dominant in the ancient fabric of the city and in need of rehabilitation. From the analysis of a representative sample of a moment of its evolution, it is possible to verify patterns and to define rules.This first approach intends to validate the use of shape grammars as a tool, able to assist the architect in the rehabilitation design process of the bourgeois house of Oporto.","design process; rehabilitation; shape grammars","en","conference paper","","","","","","","","","","","","","",""
"uuid:6f0bb5d1-e0ef-4ae7-956e-70bdd4ccf753","http://resolver.tudelft.nl/uuid:6f0bb5d1-e0ef-4ae7-956e-70bdd4ccf753","Applying Energy Performance-Based Design in Early Design Stages","Ianni, M.; Sanchez de Leon, M.","","2013","In current architectural practice some important changes are taking place because of the development of numerous Building Performance Simulations (BPS) tools to support design decisions during early phases of the design process. Many difficulties still persist, however, not necessarily due to the limitations of the available technology, but to the lack of appropriate methodologies to use the existing tools to improve the decision making process, particularly at the early design stages. In this work we present an application of performance-based design in early design phases, with the purpose to take better-informed decisions which would ultimately contribute to improve the energy performance of buildings.","energy performance-based design; design methodology; design decision-making process; building energy efficiency; building performance simulation tools","en","conference paper","","","","","","","","","","","","","",""
"uuid:84397f5b-5482-466e-b5e7-3e95969176e5","http://resolver.tudelft.nl/uuid:84397f5b-5482-466e-b5e7-3e95969176e5","Best Practices for Urban Densification: A decision-making support process using microclimate analysis methods and parametric models for optimizing urban climate comfort","Pedraza, E.T.; Kunze, A.; Roccasalva, G.; Schmitt, G.","","2013","This paper presents an approach for microclimate aware densification of urban areas by creating best practices for an in situ application for block-size urban developments. The discussed procedure generates and evaluates urban block types according to microclimate criteria by integrating climate and comfort parameters in the design process of existing urban areas. It supports urban designers by generating design strategies that aim for climate, comfort and spatial as well as for urban design qualities. To achieve this, a multi-step method with different analysis and research processes has been set up. At the end, a parametric envelope tool was created for a local case study area by incorporating pre-defined design strategies built on previous investigations as urban design strategies. It is expected that this envelope tool can be transferred to similar urban development activities and guide microclimatic versus densification trade-offs. The presented approach can be applied from street canyon to block size urban situations.","urban design; parametric modelling; analysis tools; strategic densification; microclimate evaluation; decision-support tools; decision-making process","en","conference paper","","","","","","","","","","","","","",""
"uuid:908579b3-c98b-4bbd-8167-7bdf61535252","http://resolver.tudelft.nl/uuid:908579b3-c98b-4bbd-8167-7bdf61535252","Day-to-day origin-destination tuple estimation and prediction with hierarchical bayesian networks using multiple data sources","Ma, Y.; Kuik, R.; Van Zuylen, H.J.","","2013","Prediction of traffic demand is essential, either for an understanding of the future traffic state or so necessary measures can be taken to alleviate congestion. Usually, an origin-destination (O-D) matrix is used to represent traffic demand between two zones in transportation planning. Vehicles are assumed to be homogeneous; the trips of each vehicle are examined separately. This traditional O-D matrix lacks a behavioral basis and trip-based model structure. Another research stream of travel activity-based research addresses individual travel behaviors. This stream addresses the trip chain for travelers, but the research scope is attributes of trips, which ignores the road network. The concept of the O-D tuple, a sequence of dependent O-D pairs, is proposed for linking these two fields and for predicting traffic demand better. Through advanced monitoring systems that identify and track vehicles in the road network, the additional uncertainties of O-D tuples can be mitigated and thus reduce the underspecification more specifically. The hierarchical Bayesian networks mechanism in Gaussian space with multiprocesses is proposed for gaining the posterior of uncertain parameters. The model includes level and trend components for predicting future traffic volumes. 
A case study demonstrates that the proposed method can predict demand, and the path flow from cameras can reduce uncertainty in the estimation and prediction process, especially for O-D tuples.","Origin Destination Tuple; Hierarchical Bayesian Networks; multi-process; 21 demand prediction; Multiple Data Sources","en","journal article","National Academy of Sciences","","","","","","","","Civil Engineering and Geosciences","Transport and Planning","","","",""
"uuid:c6e5f05f-fb0f-4c43-84ff-9dcbfbfe52e3","http://resolver.tudelft.nl/uuid:c6e5f05f-fb0f-4c43-84ff-9dcbfbfe52e3","The sand engine: A solution for the Dutch Delta in the 21st century?","Stive, M.J.F.; De Schipper, M.A.; Luijendijk, A.P.; Ranasinghe, R.W.M.R.J.B.; Aarninkhof, S.","","2013","The Netherlands’ strategy to combat coastal erosion since 1990 has been through nourishment, initially as beach nourishments but more and more as shoreface nourishments. In the light of sea level rise projections the yearly nourishment magnitudes continue to increase. In view of this an innovative soft engineering intervention, comprising an unprecedented 21 Mm3 sand nourishment known as the Sand Engine, has recently been implemented in the Netherlands. The Sand Engine nourishment is a pilot project to test the effectiveness and efficiency of a local mega-nourishment as a measure to account for the anticipated increased coastal recession in this century. The proposed concept, a single mega-nourishment, once every 20 years, is expected to be more efficient and effective in the long term than traditional beach and shoreface nourishments, presently being used at the Dutch coast with typically a three to five year interval. While the judgement is still out on this globally unique intervention, if proven successful, it may well become a generic solution for combating sea level rise driven coastal recession on open and vulnerable coasts.","nourishment; coastal erosion; sea level rise; storm erosion; shoreface processes; flooding; sand engine","en","conference paper","IAHR","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:cea4ca25-a845-4192-b4fd-39bb373ffc54","http://resolver.tudelft.nl/uuid:cea4ca25-a845-4192-b4fd-39bb373ffc54","Applying experience reports in design education: Challenges and ideas","Pasman, G.J.; Romero Herrera, N.A.","","2013","What if both design students and design tutors could have real-time insights into how students actually experience their design process rather than after-the-fact reflections? And what if these insights could be applied in ways that would contribute to more in-depth learning experiences? This paper addresses these questions in describing the initial steps in the development of an application that captures a student’s experience of a design process over time, by means of self-reporting. The application makes use of the method of experience sampling (ESM), which is frequently applied in psychological research to collect experiential data in natural settings and over a long period of time in order to understand people’s behaviour. The paper discusses the challenges of implementing ESM into an educational design context and presents possible ideas to overcome these challenges.","design process; experience sampling; experience reports; EPDE","en","conference paper","Dublin Institute of Technology (DIT)","","","","","","","","Industrial Design Engineering","Industrial Design","","","",""
"uuid:061df0cc-6237-4c4a-b635-0443b592a88b","http://resolver.tudelft.nl/uuid:061df0cc-6237-4c4a-b635-0443b592a88b","Socio-cultural dimensions to sharpen designer's cultural eyeglasses","Van Boeijen, A.G.C.","","2013","This paper answers the question, how the dimensions that have been developed by anthropologists to typify cultures, can support designers in user-centred design processes. An analysis and evaluation of the use of cultural dimensions in design projects was performed. Although many of the dimensions found in the literature appear relevant for designers, the theories ‘as is’ often lead designers astray. On the basis of this, a set of socio-cultural dimensions was compiled: hierarchy, identification, time, aim, gender, space, attitude, expression, truth and ‘the ones we do not know yet’. This set is proposed as a means to help designers to generate culture specific research questions, tune design methods to the local cultural context and to generate ideas for vision and concept development.","socio-cultural dimensions; cultural dimensions; design education; design process; EPDE","en","conference paper","Dublin Institute of Technology (DIT)","","","","","","","","Industrial Design Engineering","Industrial Design","","","",""
"uuid:10ef9238-8da6-4c59-a209-374201952026","http://resolver.tudelft.nl/uuid:10ef9238-8da6-4c59-a209-374201952026","Making explicit in design education: Generic elements in the design process","Van Dooren, E.J.G.C.; Boshuizen, E.; Van Merrienboer, E.; Asselbergs, M.F.; Van Dorst, M.J.","","2013","In general, designing is conceived as a complex, personal, creative and open-ended skill. Performing a well-developed skill is mainly an implicit activity. In teaching, however, it is essential to make explicit. Learning a complex skill like designing is a matter of doing and becoming aware how to do it. For teachers and students therefore, it will be helpful to make the design process explicit. In this paper, a conceptual framework is developed to be more explicit about the design process. Based on research of the design process, on differences between novices and expert designers, and on personal experience in design education practice, five generic elements in the design process are distinguished: (1) experimenting or exploring and deciding, (2) guiding theme or qualities, (3) domains, (4) frame of reference or library, (5) laboratory or (visual) language. These elements are generic in the sense that they are main aspects and always present in the complex, personal, creative and open-ended design process.","design process; generic elements; design education; making explicit","en","journal article","Springer","","","","","","","","Architecture and The Built Environment","Architectural Engineering +Technology","","","",""
"uuid:e0ff9d05-ae08-41ab-9f63-4769c0da2f28","http://resolver.tudelft.nl/uuid:e0ff9d05-ae08-41ab-9f63-4769c0da2f28","Infrastructure Management: Dynamic control of assets","Verlaan, J.G.; Schoenmaker, R.","","2013","The infrastructure in the Netherlands is crucial for economic development on a national scale. Dramatic increases of transport and mobility accelerate ageing of infrastructure. The GNP of the Netherlands is strongly related to transport and to the two main ports (Port of Rotterdam and Amsterdam Airport Schiphol). The Netherlands is used to a high standard of infrastructure and expectations of the Dutch are that this will continue. But in the public mind new capital works are predominating and renewal of existing infrastructure is taken for granted. This paper focuses on the maintenance and renewal of existing infrastructure. The economic growth and finance conditions, that gave rise to its initial development, has changed and financing of renewal and acquisition of new projects needs to be accomplished in a new and more complex economic climate. In order to provide a reliable, well-manufactured infrastructure, which satisfies public expectations, planning of the necessary activities should be carried out on tactical as well as strategical levels. The research is based on systems theory, and conditions for effective control are developed. The conceptual model is validated in real life cases in the Netherlands. The result of the research could be used as a framework for controlled, tactical asset management processes. It involves the application of detailed asset management processes, procedures and standards. 
This allows development of sub-plans for the allocation of natural, physical and financial resources, which may serve to achieve strategic goals by meeting defined levels of service.","infrastructure; assets; control; process; maintenance; traffic load; valuation; accounting standards","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","",""
"uuid:a637d508-cc17-4184-b7a8-9c2e9c7de321","http://resolver.tudelft.nl/uuid:a637d508-cc17-4184-b7a8-9c2e9c7de321","Effect of noise in blending and deblending","Berkhout, A.J.; Blacquière, G.","","2013","If simultaneous shooting is carried out by incoherent source arrays, being the condition of blended acquisition, the deblending process generates shot records with a very low residual interference (blending noise). We found, theoretically and numerically, that deblended shot records had a better background-related signal-to-noise ratio than shot records in unblended surveys. This improvement increased with increasing blending fold and decreasing survey time. An interesting consequence of this property is that blended surveys can be carried out under more severe noise conditions than unblended surveys. It is advisable to optimize the survey time in areas with a large background noise level or in areas with severe environmental restrictions.","acquisition; noise; signal processing; processing; coherence","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:cd0bef2e-8863-4732-8765-663ed16f8ed9","http://resolver.tudelft.nl/uuid:cd0bef2e-8863-4732-8765-663ed16f8ed9","On the performance of a 2D unstructured computational rheology code on a GPU","Pereira, S.P.; Vuik, K.; Pinho, F.T.; Nobrega, J.M.","","2013","The present work explores the massively parallel capabilities of the most advanced architecture of graphics processing units (GPUs) code named “Fermi”, on a two-dimensional unstructured cell-centred finite volume code. We use the SIMPLE algorithm to solve the continuity and momentum equations that was fully ported to the GPU. The benefits of this implementation are compared with a serial implementation that traditionally runs on the central processing unit (CPU). The developed codes were assessed with the bench-mark problems of Poiseuille flow, for Newtonian and generalized Newtonian fluids, as well as by the lid-driven cavity and the sudden expansion flows for Newtonian fluids. The parallel (GPU) code accelerated the resolution of those three problems by factors of 19, 10 and 11, respectively, in comparison with the corresponding CPU single core counterpart. The results are a clear indication that GPUs are and will be useful in the field of computational fluid dynamics (CFD) for rheologically simple and complex fluids.","cavitation; computational fluid dynamics; finite volume methods,; graphics processing units; parallel architectures; Poiseuille flow; rheology","en","conference paper","American Institute of Physics","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:d3ad715d-52e0-46d4-bd64-11e38da5e90a","http://resolver.tudelft.nl/uuid:d3ad715d-52e0-46d4-bd64-11e38da5e90a","Planar velocity & concentration measurements in a magnetic micromixer with interface front detection","Ergin, F.G.; Watz, B.B.; Erglis, K.; Cebers, A.","","2013","Mixing is often a challenge in small scales and substantial research effort is focused on designing high performance micromixers. Active micromixers use various forces to enhance mixing efficiency. Among these, magnetic forces are often preferred as they are non-contact and do not require manufacturing of small moving parts in the microchannel. Laser-based diagnostic tools have great potential in providing multi-parameter information in microfluidics research on mixing. In this work, we extract velocity, concentration and interface front information from a single image pair from a magnetic micromixer undergoing labyrinthine (fingering) instability. The experiments were performed using a MicroPIV system with stroboscopic LED illumination. Velocity information from particle displacements are computed using Least Squares Matching (LSM) and compared with previously published results using Adaptive Cross Correlation (ACC). It turns out that LSM is less sensitive to image contrast; and able to extract most of the useful velocity information from the raw images compared to the processed images. This makes LSM an important global tool for PIV analysis where image pre-processing can be avoided completely, for example in industrial mixing applications. The use of image processing functions proves to be essential in multi-parameter microfluidics: Concentration measurements are performed using absorption imaging after removal of particles using a series of low-pass filters. 
Results for interface front detection using various other image processing functions are also presented.","MicroPIV, magnetic micromixer, image processing, labyrinthine (fingering) instability, local contrast; normalization (LCN), difference of Gaussian (DoG) filter, absorption imaging, front detection","en","conference paper","","","","","","","","","","","","","",""
"uuid:4e19cad3-0a89-483a-8a72-eb4f1b083e24","http://resolver.tudelft.nl/uuid:4e19cad3-0a89-483a-8a72-eb4f1b083e24","The Pattern Book: European Masters in Urbanism TUDelft 2013","Van Dorst, M.","Deshmukh, A. (contributor); Chranioti, A. (contributor); Überbacher, A. (contributor); Sanna, A. (contributor); Hao, F. (contributor); Câmara, C. (contributor); Bobkova, J. (contributor); Sundermann, K. (contributor); Carvalho, L. (contributor); Zhang, M. (contributor); Nagels, M. (contributor); Koshy, M. (contributor); Gupta, R. (contributor); Xiao, S. (contributor)","2013","This book gives an overview of the patterns developed in the EMU course AR9210 'The Sustainable City - Theories on Urban Design'.","patterns; urban design process; European Masters in Urbanism","en","book","","","","","","","","","Architecture and The Built Environment","Urbanism","","","",""
"uuid:819eded6-4e65-42c4-9f44-ab725976fe20","http://resolver.tudelft.nl/uuid:819eded6-4e65-42c4-9f44-ab725976fe20","The IPG BAR project","Mentink, B.; Henriquez, L.; Van Niekerk, L.; Verheul, R.","Van Timmeren, A. (contributor); Stolk, E. (contributor); Kleijn, R. (contributor)","2013","The booklets 'Making Patterns' and 'Using Patterns' describe two aspects of the the pattern method within the urban design process. The accompanying IPG BAR Pattern Library is a small pattern library of urban-airport symbiosis patterns. These documents are the result of the Interdisciplinary Project Group “Better Airport Regions” MSc Industrial Ecology, Delft University of Technology & Leiden University.","patterns; urban design process; urban-airport symbiosis","en","book","","","","","","","","","Architecture and The Built Environment","Urbanism","","","",""
"uuid:796cf103-8093-4098-8ada-cabd3332bdd3","http://resolver.tudelft.nl/uuid:796cf103-8093-4098-8ada-cabd3332bdd3","Input reduction for long-term morphodynamic simulations","Walstra, D.J.R.; Ruessink, G.; Hoekstra, R.; Tonnon, P.K.","","2013","Input reduction is imperative to long-term (> years) morphodynamic simulations to avoid excessive computation times. Here, we discuss the input-reduction framework for wave-dominated coastal settings introduced by Walstra et al. (2013). The framework comprised 4 steps, viz. (1) the selection of the duration of the original (full) time series of wave forcing, (2) the selection of the representative wave conditions, (3) the sequencing of these conditions, and (4) the time span after which the sequence is repeated. In step (2), the chronology of the original series is retained, while that is no longer the case in steps (3) and (4). The framework was applied to two different sites (Noordwijk, Netherlands and Hasaki, Japan) with multiple nearshore sandbars but contrasting long-term offshore-directed behavior: at Noordwijk the offshore migration is gradual and not coupled to individual storms, while at Hasaki the offshore migration is more episodic, and wave chronology appears to control long-term evolution. The performance of the model with reduced wave climates was referenced to a simulation with the actual (full) wave-forcing series. It was demonstrated that input reduction can dramatically affect long-term predictions, even to such an extent that the main characteristics of the offshore bar cycle are no longer reproduced. This was particularly the case at Hasaki, where all synthetic series that no longer capture the initial chronology (steps 3 and 4) lead to rather unrealistic long-term simulations. 
At Noordwijk, synthetic series can result in realistic behavior, provided that the time span after which the sequence is repeated is not too large; the reduction of this time span has the same positive effect on the simulation as increasing the number of selected conditions in step 2. It was further demonstrated that, although storms result in the largest morphological change, conditions with low to intermediate wave energy must be retained to obtain realistic long-term sandbar behavior. The input-reduction framework must be applied in an iterative fashion so as to obtain a reduced wave climate that simulates long-term sandbar behavior sufficiently accurately within an acceptable computation time. Given its potentially huge impact on the actual simulation, we believe it is imperative to consider input reduction as an intrinsic part of model set-up, calibration and validation.","input reduction; morphodynamic modeling; process based modeling; cyclic bar behavior; Unibest-TC; morphodynamic upscaling","en","conference paper","Bordeaux University","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:612432d4-62a6-451b-8476-3a48859dea35","http://resolver.tudelft.nl/uuid:612432d4-62a6-451b-8476-3a48859dea35","Influence of profile features on longshore sediment transport","Mil Homens, J.P.; Ranasinghe, R.W.M.R.J.B.; Van Thiel de Vries, J.S.M.; Stive, M.J.F.","","2013","Longshore sediment transport (LST) is one of the main drivers of beach morphology. Bulk LST formulas are routinely used in coastal management/engineering studies to assess LST rates and gradients. However, there is still great uncertainty in LST estimation with these bulk formulas. This uncertainty may have two sources: 1) experimental errors in the measured values and 2) the effect of parameters that are not part of the formulas. In this study, we attempt to determine the influence on LST rates of profile-related features that are not accounted for in the bulk formulas. These features may influence the type of wave breaking. A process-based model (UNIBEST-LT) is used to calculate LST rates on a large number of profiles measured on the Dutch coast, all forced with the same realistic wave climate. We found that the LST rates vary with the profiles. The value corresponding to the 95th percentile of the resulting distribution is 50% higher than that corresponding to the 5th percentile. The root mean square downward slope parameter showed the best correlation with LST rates.","longshore sediment transport; process based models; UNIBEST-LT; cross-shore profile features","en","conference paper","Bordeaux University","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:3d2037f0-001b-4426-8c0e-463df9dd78ed","http://resolver.tudelft.nl/uuid:3d2037f0-001b-4426-8c0e-463df9dd78ed","The sand engine: A solution for vulnerable deltas in the 21st century?","Stive, M.J.F.; De Schipper, M.A.; Luijendijk, A.P.; Ranasinghe, R.W.M.R.J.B.; Van Thiel De Vries, J.S.M.; Aarninkhof, S.; Van Gelder-Maas, C.; De Vries, S.; Henriquez, M.; Marx, S.","","2013","The Netherlands’ strategy to combat coastal erosion since 1990 has been through nourishment, initially as beach nourishments but more and more as shoreface nourishments. In the light of sea level rise projections, the yearly nourishment magnitudes continue to increase. In view of this an innovative soft engineering intervention, comprising an unprecedented 21 Mm3 sand nourishment known as the Sand Engine, has recently been implemented in the Netherlands. The Sand Engine nourishment is a pilot project to test the effectiveness and efficiency of a local meganourishment as a measure to account for the anticipated increased coastal recession in this century. The proposed concept, a single mega-nourishment, once every 20 years, is expected to be more efficient and effective in the long term than traditional beach and shoreface nourishments, presently being used at the Dutch coast with typically a three to five year interval. While the jury is still out on this globally unique intervention, if proven successful, it may well become a generic solution for combating sea level rise driven coastal recession on open and vulnerable coasts.","nourishment; coastal erosion; sea level rise; storm erosion; shoreface processes; flooding; sand engine","en","conference paper","Bordeaux University","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:de5fa1a7-3fb3-4797-ac0e-ad822780ab87","http://resolver.tudelft.nl/uuid:de5fa1a7-3fb3-4797-ac0e-ad822780ab87","Hydrogen termination of CVD diamond films by high-temperature annealing at atmospheric pressure","Seshan, V.; Ullien, D.; Castellanos-Gomez, A.; Sachdeva, S.; Murthy, D.H.K.; Savenije, T.J.; Ahmad, H.A.; Nunney, T.S.; Janssens, S.D.; Haenen, K.; Nesládek, M.; Van der Zant, H.S.J.; Sudhölter, E.J.R.; De Smet, L.C.P.M.","","2013","A high-temperature procedure to hydrogenate diamond films using molecular hydrogen at atmospheric pressure was explored. Undoped and doped chemical vapour deposited (CVD) polycrystalline diamond films were treated according to our annealing method using a H2 gas flow down to ~50 ml/min (STP) at ~850 °C. The films were extensively evaluated by surface wettability, electron affinity, elemental composition, photoconductivity, and redox studies. In addition, electrografting experiments were performed. The surface characteristics as well as the optoelectronic and redox properties of the annealed films were found to be very similar to hydrogen plasma-treated films. Moreover, the presented method is compatible with atmospheric pressure and provides a low-cost solution to hydrogenate CVD diamond, which makes it interesting for industrial applications. The plausible mechanism for the hydrogen termination of CVD diamond films is based on the formation of surface carbon dangling bonds and carbon-carbon unsaturated bonds at the applied temperature, which react with molecular hydrogen to produce a hydrogen-terminated surface.","annealing; atmospheric pressure; CVD coatings; dangling bonds; diamond; elemental semiconductors; high-temperature effects; hydrogenation; oxidation; photoconductivity; plasma materials processing; reduction (chemical); semiconductor growth; semiconductor thin films; wetting","en","journal article","American Institute of Physics","","","","","","","","Applied Sciences","","","","",""
"uuid:bfc36534-6546-4173-be20-c08d0aff2ee7","http://resolver.tudelft.nl/uuid:bfc36534-6546-4173-be20-c08d0aff2ee7","Adaptive and Sequential Gridding Procedures for the Abstraction and Verification of Stochastic Processes","Esmaeil Zadeh Soudjani, S.; Abate, A.","","2013","This work is concerned with the generation of finite abstractions of general state-space processes to be employed in the formal verification of probabilistic properties by means of automatic techniques such as probabilistic model checkers. The work employs an abstraction procedure based on the partitioning of the state-space, which generates a Markov chain as an approximation of the original process. A novel adaptive and sequential gridding algorithm is presented and is expected to conform to the underlying dynamics of the model and thus to mitigate the curse of dimensionality unavoidably related to the partitioning procedure. The results are also extended to the general modeling framework known as stochastic hybrid systems. While the technique is applicable to a wide arena of probabilistic properties, the focus here is on a particular specification (probabilistic safety, or invariance, over a finite horizon); the proposed adaptive algorithm is first benchmarked against a uniform gridding approach taken from the literature and finally tested on an applied case study in biology.","general state-space processes; Markov chains; stochastic hybrid systems; abstractions; approximations; formal verification; safety and invariance; properties and specifications","en","journal article","Society for Industrial and Applied Mathematics (SIAM)","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:10cf020f-6c1f-42a4-8442-85faee68f4f2","http://resolver.tudelft.nl/uuid:10cf020f-6c1f-42a4-8442-85faee68f4f2","Design of polymeric capsules for autonomous healing of cracks in cementitious materials","Hilloulin, B.; Van Tittelboom, K.; Gruyaert, E.; Loukili, A.; De Belie, N.","","2013","Currently, most of the capsules used to contain polymeric healing agents in self-healing concrete are made of glass. However, glass capsules cannot be mixed in concrete and are therefore placed manually into the moulds during concrete casting in laboratory tests. This represents a major drawback for an eventual industrialisation. In this study, polymeric capsules were designed to meet three requirements: breakage upon crack appearance, compatibility with the polymeric healing agent and survival during concrete mixing. Three different polymers with a low glass transition temperature (Tg) were selected (PLA, PS, P(MMA-n-BMA)). These polymers are brittle at 20°C, and can consequently break upon crack appearance, but are rubbery above their glass transition temperature and, consequently, can survive mixing upon heating. Differential Scanning Calorimetry and Dynamic Mechanical Analysis were performed to define the glass transition temperature of the selected polymers and to quantify the evolution of their mechanical properties with increasing temperature. Concrete mixing tests were performed both at 20°C and at a temperature above the Tg of the capsules. Mixing at increased temperature was done by preheating the capsules and the concrete components. The survival rates increased drastically when the capsules and the concrete components were heated. Even capsules with a thin wall (thickness 0.4 mm) resisted a 2 minute concrete mixing process, whereas none of them survived at 20°C. In addition, the compatibility of the capsules with a two-component polyurethane healing agent was studied. The pre-polymer hardened after a few days. 
This research revealed that suitable design of polymeric capsules can help to meet the requirements for self-healing concrete, even though further research is needed before possible use in industry.","concrete; autonomous healing; polymeric capsules; concrete mixing; process","en","conference paper","","","","","","","","","","","","","",""
"uuid:40f3749d-a312-46a4-b005-81120e25807c","http://resolver.tudelft.nl/uuid:40f3749d-a312-46a4-b005-81120e25807c","Flexible supramolecular matrix based self-healing composites","Sordo, F.; Michaud, V.","","2013","Supramolecular polymers have gained great success in recent years as self-healing materials, and many different systems have been developed. These polymers combine the advantages of intrinsic and autonomic self-healing systems. In 2010, Montarnal et al. developed a class of epoxy-based hybrid networks that combine both chemical and supramolecular hydrogen-bonding crosslinks and that are characterized by self-healing properties. These polymers are moreover a priori compatible with composite processing techniques, such as vacuum infusion. For this reason the development of composite materials including the self-healing ability of this type of matrix represents a promising way forward. In this poster, we will present the development of supramolecular matrix based self-healing composite materials. A supramolecular network with 50% of epoxy crosslinks will be used as matrix material, and glass fibers as reinforcements. In particular, attention will be focused on the set-up of the composite processing window, and on the self-healing behavior of the obtained materials themselves as well as at the interface between the matrix and the reinforcement. The latter will be studied through pull-out tests on single fiber model composites. This PhD research work is part of the SHeMat project ""Training Network for Self-Healing Materials: from Concepts to Market"", a training and research network funded within the scope of the Seventh Framework Programme by the European Commission's Marie Curie programme.","flexible composite; self-healing; supramolecular system; adhesion study; processing","en","conference paper","","","","","","","","","","","","","",""
"uuid:889bfddf-82f0-45e8-a6c8-e7c6df060c66","http://resolver.tudelft.nl/uuid:889bfddf-82f0-45e8-a6c8-e7c6df060c66","Self-healing phenomena in polymers based on the theory of porous media","Specht, S.; Bluhm, J.; Schröder, J.","","2013","Self-healing materials are becoming more and more important for the construction of mechanical components due to their ability to detect and heal failures as well as cracks autonomously. Especially in polymers and polymer-composites, where the component can lose a great deal of strength and durability due to micro cracks, such damage is nearly impossible to repair from the outside. Thus, self-healing ability is a very effective approach to extend the lifetime of polymer-made components. In view of the numerical simulation of such self-healing effects, we consider the microencapsulation approach [1] and develop a thermodynamically consistent macroscopic 5-phase model within the framework of the Theory of Porous Media (TPM) [2]. The model consists of the following different phases: solid, liquid, healed material, gas, and catalysts. The increase of damage, which is represented by the gas phase, is driven by a damage evolution equation. Furthermore, a mass exchange between the liquid-like healing agents and the solid-like healed material, i.e., the change of the aggregate state from liquid healing to solid healed material, describes the healing process. The onset of the healing process is associated with the breaking open of the microcapsules in connection with the subsequent motion of the liquid healing agents. Numerical examples of the simulation of healing processes in polymers and polymer-composites are presented in order to show the applicability of the model.","multiphase systems; theory of porous media; phase transition; healing processes","en","conference paper","","","","","","","","","","","","","",""
"uuid:cbc70978-62cd-44cb-822d-419387427efa","http://resolver.tudelft.nl/uuid:cbc70978-62cd-44cb-822d-419387427efa","Making curricular change: Case report of a radical reconstruction process","Kamp, A.; Klaassen, R.","","2013","Educational change is technically relatively simple but socially complex. Making effective change in engineering curricula is problematic and often fails due to too high ambitions, too short development time frames, inconsistent design and a lack of a systems approach, but also due to poor leadership, lack of ownership and low faculty engagement. The literature shows that typically only 30% of the original objectives of an intended curriculum change are achieved in the as-built programme. In the period 2006-2010, the TU Delft Faculty of Aerospace Engineering re-established the profile of the bachelor and made a radical reconstruction by recalibrating the content and introducing a state-of-the-art active teaching approach. The innovative bachelor educates tomorrow’s engineers in the context of conception, design, implementation and operation of aircraft and spacecraft systems and processes. The paper gives an inside look into the reconstruction process. It shows that curriculum change is engineering and not science; it is politics and not always rational. The paper starts with an update of the educational vision that resulted in the prime objectives of change. It follows the systems approach with the student as the user and co-producer of the education always in mind. It addresses the design and development plan of the reconstruction, its organisation and leadership, and the role of upper management. These change over time and depend on the phase of development.","educational change; curricular change; education reform; change process; colour-thinking","en","conference paper","CDIO","","","","","","","","Aerospace Engineering","Support Aerospace Engineering","","","",""
"uuid:c7e37777-eb7a-4e5f-b0e3-ec93f68a655d","http://resolver.tudelft.nl/uuid:c7e37777-eb7a-4e5f-b0e3-ec93f68a655d","Circular Urban Systems: Moving Towards Systems Integration","Vernay, A.B.H.","De Bruijn, J.A. (promotor); Mulder, K.F. (promotor)","2013","Today, most cities function linearly. One way to improve their environmental performance is to make a transition from linear to Circular Urban Systems (CUS) so that part of the waste streams becomes valorised locally. This requires not only technical but also organisational and institutional changes. Moreover, it implies that links have to be created between systems that were previously separate (wastewater treatment and transport, for instance). This is called systems integration. This research aims to better understand how systems integration comes about. To be able to reconstruct processes of systems integration, a conceptual framework is developed that combines Actor-Network Theory with insights from structuration theory. A method is also suggested that helps visualise integration processes. These are then applied to the study of three urban areas: EVA-Lanxmeer in Culemborg (NL), Hammarby Sjöstad in Stockholm (SE) and Lille Métropole (FR). In total ten attempts at systems integration are analysed. Researchers earlier pointed out that systems integration could potentially lock systems in a sub-optimal situation. This study however shows that systems integration can foster innovation and open up new technological pathways. Moreover, this research highlights that systems integration often faces barriers due to the structural incompatibilities of the systems involved. Creating bridging systems can be a way to overcome these incompatibilities. It also shows that the presence of an organisation able to provide inter-sectorial coordination is often necessary. 
Finally, results indicate that maintaining a high level of autonomy is conducive to systems integration.","systems integration; industrial ecology; innovation processes; sustainable urban development","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","Values and Technology","","","",""
"uuid:8594fe0e-f359-426c-8cb6-271bff80cc15","http://resolver.tudelft.nl/uuid:8594fe0e-f359-426c-8cb6-271bff80cc15","Efficient Pricing of European-Style Asian Options under Exponential Lévy Processes Based on Fourier Cosine Expansions","Zhang, B.; Oosterlee, C.W.","","2013","We propose an efficient pricing method for arithmetic and geometric Asian options under exponential Lévy processes based on Fourier cosine expansions and Clenshaw–Curtis quadrature. The pricing method is developed for both European-style and American-style Asian options and for discretely and continuously monitored versions. In the present paper we focus on the European-style Asian options. The exponential convergence rates of Fourier cosine expansions and Clenshaw–Curtis quadrature reduce the CPU time of the method to milliseconds for geometric Asian options and a few seconds for arithmetic Asian options. The method’s accuracy is illustrated by a detailed error analysis and by various numerical examples.","arithmetic Asian options; exponential Lévy asset price processes; Fourier cosine expansions; Clenshaw–Curtis quadrature; exponential convergence","en","journal article","Society for Industrial and Applied Mathematics (SIAM)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:ffd317ac-32a4-4fe4-a183-8590aefe34e8","http://resolver.tudelft.nl/uuid:ffd317ac-32a4-4fe4-a183-8590aefe34e8","The central role of the construction sector for climate change adaptations in the built environment","Roders, M.J.; Straub, A.; Visscher, H.J.","","2013","Over the past years, research has clearly enunciated the necessity of adaptation to climate change in the built environment. Policy is being developed on national and municipal levels to have adaptations implemented. However, for the actual application of the measures, property owners are the actors that have to commission the construction industry to take action. But the construction sector is highly fragmented, causing several barriers for an easy uptake of measures other than the ‘business as usual’ ones. Based on rehabilitation intervention processes where technical measures are applied to dwellings of a housing association in the Netherlands, a governance approach for implementing adaptation measures is explored that focuses on collaboration in the construction process. In the proposed approach actors are working closely together, guided by elements of network governance. By not only integrating the complete supply chain, but also making it ‘intelligent and aware’, climate adaptation is no longer a surplus to the process, but is reflected in every decision.","adaptation measures; climate change; construction process; networks","en","conference paper","International Council for Research and Innovation in Building and Construction (CIB)","","","","","","","","OTB Research Institute for the Built Environment","OTB Research","","","",""
"uuid:c287e110-3507-4f31-8763-e1d5494d1a00","http://resolver.tudelft.nl/uuid:c287e110-3507-4f31-8763-e1d5494d1a00","Human motion classification using a particle filter approach: Multiple model particle filtering applied to the micro-Doppler spectrum","Groot, S.; Harmanny, R.; Driessen, H.; Yarovoy, A.","","2013","In this article, a novel motion model-based particle filter implementation is proposed to classify human motion and to estimate key state variables, such as motion type, i.e. running or walking, and the subject’s height. Micro-Doppler spectrum is used as the observable information. The system and measurement models of human movements are built using three parameters (relative torso velocity, height of the body, and gait phase). The algorithm developed has been verified on simulated and experimental data.","radar signal processing and system modeling; radar applications","en","journal article","European Microwave Association","","","","","","","2014-04-23","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","","",""
"uuid:fc7624b7-550d-4a6b-bd54-e95f3ca8c9ec","http://resolver.tudelft.nl/uuid:fc7624b7-550d-4a6b-bd54-e95f3ca8c9ec","3D imaging by fast deconvolution algorithm in short-range UWB radar for concealed weapon detection","Savelyev, T.; Yarovoy, A.","","2013","A fast imaging algorithm for real-time use in short-range ultra-wideband (UWB) radar with synthetic or real-array aperture is proposed. The reflected field is presented here as a convolution of the target reflectivity and point spread function (PSF) of the imaging system. To obtain a focused 3D image, the proposed algorithm deconvolves the PSF from the acquired data volume at high speed due to the fast Fourier transform and implementation in the frequency-wavenumber domain. Then the result is tested against two numerical criteria for efficiency, namely error and instability, whose optimal values can be obtained iteratively. Since the PSF differs with distance, the algorithm suits mainly applications with relatively small objects such as concealed weapon detection. Using several PSFs allows us to image a certain range of interest by their successive deconvolution from the same data. Performance of the algorithm has been evaluated experimentally and compared with that of Kirchhoff migration. Measurements were carried out by a 5–25 GHz synthetic aperture radar in the lab, and scenarios included a gun and a ceramic knife in free space, on a large metal plate, and a gun concealed on a dummy under a thick raincoat. The results demonstrate sufficient image quality obtained in a fraction of the time.","radar and homeland security; radar signal processing and system modeling","en","journal article","European Microwave Association","","","","","","","2014-04-03","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:29b6b47d-8cd6-4099-950f-bd4f1f50cf16","http://resolver.tudelft.nl/uuid:29b6b47d-8cd6-4099-950f-bd4f1f50cf16","Ultrahigh throughput plasma processing of free standing silicon nanocrystals with lognormal size distribution","Dogan, I.; Kramer, N.J.; Westermann, R.H.J.; Dohnalova, K.; Smets, A.H.M.; Verheijen, M.A.; Gregorkiewicz, T.; Van de Sanden, M.C.M.","","2013","We demonstrate a method for synthesizing free standing silicon nanocrystals in an argon/silane gas mixture by using a remote expanding thermal plasma. Transmission electron microscopy and Raman spectroscopy measurements reveal that the distribution has a bimodal shape consisting of two distinct groups of small and large silicon nanocrystals with sizes in the range 2–10 nm and 50–120 nm, respectively. We also observe that both size distributions are lognormal, which is linked with the growth time and transport of nanocrystals in the plasma. Average size control is achieved by tuning the silane flow injected into the vessel. Analyses on morphological features show that nanocrystals are monocrystalline and spherically shaped. These results imply that formation of silicon nanocrystals is based on nucleation, i.e., these large nanocrystals are not the result of coalescence of small nanocrystals. Photoluminescence measurements show that silicon nanocrystals exhibit a broad emission in the visible region peaked at 725 nm. 
Nanocrystals are produced with ultrahigh throughput of about 100 mg/min and have state-of-the-art properties, such as controlled size distribution, easy handling, and room-temperature visible photoluminescence.","crystal morphology; elemental semiconductors; nanofabrication; nanostructured materials; nucleation; photoluminescence; plasma materials processing; Raman spectra; semiconductor growth; silicon; transmission electron microscopy","en","journal article","American Institute of Physics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Electrical Sustainable Energy","","","",""
"uuid:f5d77311-ce6b-4e55-b767-cec44ef2e6da","http://resolver.tudelft.nl/uuid:f5d77311-ce6b-4e55-b767-cec44ef2e6da","Tip-based chemical vapor deposition with a scanning nano-heater","Gaitas, A.","","2013","In this preliminary effort, a moving nano-heater directs a chemical vapor deposition reaction (nano-CVD), demonstrating a tip-based nanofabrication (TBN) method. Localized nano-CVD of copper (Cu) and copper oxide (CuO) on a silicon (Si) and silicon oxide (SiO2) substrate from gases, namely sublimated copper acetylacetonate (Cu(acac)2), argon (Ar), and oxygen (O2), is demonstrated. This technique is applicable to other materials.","chemical vapour deposition; copper; copper compounds; manufacturing processes; nanofabrication; nanostructured materials","en","journal article","American Institute of Physics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:c9f651d9-28af-4324-96d7-a2c41afe3529","http://resolver.tudelft.nl/uuid:c9f651d9-28af-4324-96d7-a2c41afe3529","Transistor-Level Statistical Timing Analysis: Solving Random Differential Equations Directly","Tang, Q.","Charbon, E. (promotor)","2013","In this Ph.D. thesis, a novel non-MC Random differential Equation based Statistical Timing Analysis (RESTA) method is proposed, which considers both process variations and electrical circuit effects, such as multiple input simultaneous switching and crosstalk effects. To make the approach practical for analysis of large circuits at high accuracy, we propose a Simplified Transistor Model (STM) for transistor-level timing analysis. For statistical delay calculation, instead of simulating thousands of times using Monte Carlo methods or simulating at many corners using corner-based methods, RESTA simulates only once, solving random differential equations directly. Crosstalk effects are taken into account based on our proposed Piecewise Linear Delay change curve Model (PLDM) for statistical interconnect delay calculation.","statistical timing analysis; process variations; transistor level; random differential equation; crosstalk effect","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","","",""
"uuid:0b33f5ac-e0e9-4888-8af4-bf10cda5850f","http://resolver.tudelft.nl/uuid:0b33f5ac-e0e9-4888-8af4-bf10cda5850f","Wavefield decomposition based on acoustic reciprocity: Theory and applications to marine acquisition","Van Borselen, R.G.; Fokkema, J.T.; Van den Berg, P.M.","","2013","In marine seismic acquisition, the free surface generates seismic events in our recorded data that are often categorized as noise because these events do not contain independent information about the subsurface geology. Ghost events are considered as such noise because these events are generated when the energy generated by the seismic source, as well as any upgoing wavefield propagating upward from the subsurface, is reflected downward by the free surface. As a result, complex interference patterns between up- and downgoing wavefields are present in the recorded data, affecting the spectral bandwidth of the recorded data negatively. The interpretability of the data is then compromised, and hence it is desirable to remove the ghost events from the data. Rayleigh’s reciprocity theorem is used to derive the relevant equations for wavefield decomposition for multisensor and single-sensor data, for depth-varying and depth-independent recordings from marine seismic experiments using a single-source or dual-source configuration. A comparison is made between the results obtained for a 2D synthetic example designed to highlight the strengths and weaknesses of the various acquisition configurations. It is demonstrated that, using the proposed wavefield decomposition method, multisensor data (measurements of pressure and particle velocity components, or multidepth pressure measurements) allow for optimal wavefield decomposition as independent measurements are used to eliminate the interference patterns caused by the free surface. Single-sensor data using constant-depth recordings are found to be incapable of producing satisfactory results in the presence of noise. 
Single-sensor data using a configuration with depth-varying measurements are able to deliver better results than when constant-depth recordings are used, but the results obtained are not of the same quality as when multisensor data are used.","acquisition; multicomponent; wave equation; acoustic; signal processing","en","journal article","Society of Exploration Geophysicists","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:6bb7c382-3f25-4a64-b3f4-8a08da229185","http://resolver.tudelft.nl/uuid:6bb7c382-3f25-4a64-b3f4-8a08da229185","Ultrasonic processing of aluminum alloys","Zhang, L.","Katgerman, L. (promotor); Eskin, D.G. (promotor)","2013","Research on ultrasonic processing for metallurgical applications shows a promising influence on improving the casting properties of aluminium alloys. The principle of ultrasonic processing is the introduction of acoustic waves with a frequency higher than 17 kHz into liquid metal. Several promising beneficial effects caused by ultrasonic processing in aluminium alloys were observed and researched, such as ultrasonic-aided grain refinement, reduction of thermal contraction during solidification, and ultrasonic degassing. The systematic study of this subject allows us to achieve a better understanding of ultrasonic processing in aluminium alloys. It also provides valuable information for optimization of ultrasonic processing parameters in industrial application.","Ultrasonic processing; Solidification; Aluminum alloys; Microstructure; Casting properties","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","MSE","","","",""
"uuid:0e2b720d-4182-4c25-8c70-33e3405a080b","http://resolver.tudelft.nl/uuid:0e2b720d-4182-4c25-8c70-33e3405a080b","Signal processing in optical coherence tomography for aerospace material characterization","Liu, P.; Groves, R.M.; Benedictus, R.","","2013","Based on a customized time-domain optical coherence tomography (OCT) system, a series of signal processing approaches have been designed and reviewed. To improve demodulation accuracy and image quality, demodulation approaches such as the median filter, Hilbert transform, and envelope detector were investigated with simulated as well as experimental data. Without noise, the Hilbert transform has the best performance, but after considering the narrow-band noise in the modulated signal, the envelope detector was selected as the ideal demodulation technique. To reduce noise and enhance image contrast, digital signal processing techniques such as bandpass filtering and two-dimensional median filtering were applied before and after the demodulation, respectively. Finally, with the integration of the customized OCT setup and designed signal processing algorithms, aerospace materials, such as polymer coatings and glass-fiber composites, were successfully characterized. The cross-sectional images obtained clearly show the microstructures of the materials.","optical coherence tomography; signal processing; demodulation; median filter; aerospace materials; microstructure","en","journal article","SPIE (International Society for Optical Engineering)","","","","","","","","Aerospace Engineering","Aerospace Structures and Materials","","","",""
"uuid:f40a69d2-f954-4e02-87af-ae8cfc1f11d4","http://resolver.tudelft.nl/uuid:f40a69d2-f954-4e02-87af-ae8cfc1f11d4","Towards high resolution quantitative subsurface models by full waveform inversion","Haffinger, P.; Gisolf, A.; Van den Berg, P.M.","","2013","Full waveform inversion (FWI) has the potential to recover detailed quantitative property models of the subsurface, but the process is computationally expensive. Currently available computer systems do not allow the use of the full bandwidth of the acquired seismic data, which effectively reduces the resolution that can be obtained. In this paper, we propose a novel approach to obtain high resolution subsurface models from broad-band FWI. The method is based on localization of the inversion, after which the interaction between local domains is estimated. A global field update is calculated which honours the non-linear relationship between the subsurface properties and the measured seismic data. By using this non-linearity, the spectral gap between the a priori background model and the seismic bandwidth will be closed and spatially broad-band properties can be estimated from a band-limited seismic signal.","image processing; numerical solutions; inverse theory","en","journal article","Oxford University Press on behalf of The Royal Astronomical Society","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:407c0a83-25c1-4cb3-ab4c-5a691d19f949","http://resolver.tudelft.nl/uuid:407c0a83-25c1-4cb3-ab4c-5a691d19f949","Prediction and Optimization of Speech Intelligibility in Adverse Conditions","Taal, C.H.","Lagendijk, R.L. (promotor)","2013","In digital speech-communication systems like mobile phones, public address systems and hearing aids, conveying the message is one of the most important goals. This can be challenging since the intelligibility of the speech may be harmed at various stages before, during and after the transmission process from sender to receiver. Causes of such adverse conditions include background noise, an unreliable internet connection during a Skype conversation, or a hearing impairment of the receiver. To overcome this, many speech-communication systems include speech processing algorithms, such as noise reduction, to compensate for these signal degradations. To determine the effect of these signal processing based solutions on speech intelligibility, the speech signal has to be evaluated by means of a listening test with human listeners. However, such tests are costly and time consuming. As an alternative, reliable and fast machine-driven intelligibility predictors are of interest, since they might replace listening tests, at least in some stages of the algorithm development process. Two important issues exist with current intelligibility predictors. (1) Many of these methods cannot reliably predict the effect of more advanced nonlinear signal processing algorithms on speech intelligibility. (2) Typically, these measures are based on very complex auditory models or use average statistics of minutes of running speech, which makes it difficult to design new (real-time) speech processing solutions in an optimal manner given such a measure. To this end, we propose several new measures which show good prediction results for the intelligibility of nonlinearly processed speech. 
The newly proposed measures have low computational complexity and are mathematically tractable, which makes them suitable for the optimization of new signal processing solutions that aim to improve speech intelligibility.","speech processing; speech intelligibility","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:4368c493-b659-47fb-aa43-22503a692a15","http://resolver.tudelft.nl/uuid:4368c493-b659-47fb-aa43-22503a692a15","A method to reduce ambiguities of qualitative reasoning for conceptual design applications","D'Amelio, V.; Chmarra, M.K.; Tomiyama, T.","","2013","Qualitative reasoning can generate ambiguous behaviors due to the lack of quantitative information. Despite many different research results focusing on ambiguity reduction, fundamentally it is impossible to totally remove ambiguities with only qualitative methods and to guarantee the consistency of results. This prevents the wide use of qualitative reasoning techniques in practical situations, particularly in conceptual design, where qualitative reasoning is considered intrinsically useful. To improve this situation, this paper initially investigates the origin of ambiguities in qualitative reasoning. Then it proposes a method based on intelligent interventions of the user, who is able to detect ambiguities, to prioritize interventions on these ambiguities, and to reduce ambiguities based on the least commitment strategy. This interaction method breaks through the limit of qualitative reasoning in practical applications to conceptual design. The method was implemented as a new feature in a software tool called the Knowledge Intensive Engineering Framework in order to be tested and used for a printer design.","ambiguity reduction; conceptual design; qualitative process theory; qualitative reasoning; user intervention","en","journal article","Cambridge University Press","","","","","","","2014-01-15","Mechanical, Maritime and Materials Engineering","Biomechanical Engineering","","","",""
"uuid:6ffaa968-8b44-4604-b3ab-8698b69d88a5","http://resolver.tudelft.nl/uuid:6ffaa968-8b44-4604-b3ab-8698b69d88a5","A Benchmark Approach of Counterparty Credit Exposure of Bermudan Option under Lévy Process: The Monte Carlo-COS Method","Shen, Y.; Van der Weide, J.A.M.; Anderluh, J.H.M.","","2013","An advanced method, which we call the Monte Carlo-COS method, is proposed for computing the counterparty credit exposure profile of Bermudan options under a Lévy process. The different exposure profiles and exercise intensity under different measures, P and Q, are discussed. Since the COS method [1] delivers accurate Bermudan prices and no change of measure [2] is needed to get the P-probability distribution, the exposure profile produced by the Monte Carlo-COS algorithm can be used as a benchmark result, e.g., to analyse the reliability of the popular American Monte Carlo method [3], [4] and [5]. The efficient calculation of expected exposure (EE) [6] can be further applied to the computation of credit value adjustment (CVA) [6].","counterparty credit risk; Monte Carlo-COS method; Bermudan option; Lévy process; American Monte Carlo method; credit value adjustment","en","journal article","Elsevier","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:eb7ed8c3-4c37-4d2c-9d4f-a7571fb3bbde","http://resolver.tudelft.nl/uuid:eb7ed8c3-4c37-4d2c-9d4f-a7571fb3bbde","The use of process simulation models in virtual commissioning of process automation software in drinking water treatment plants","Worm, G.I.M.; Kelderman, J.P.; Lapikas, T.; Van der Helm, A.W.C.; Van Schagen, K.M.; Rietveld, L.C.","","2012","This research deals with the contribution of process simulation models to the factory acceptance test (FAT) of process automation (PA) software of drinking water treatment plants. Two test teams tested the same piece of modified PA-software. One team used an advanced virtual commissioning (AVC) system consisting of PA-emulation and integrated process simulation models; the other team used the same PA-emulation but with basic parameter relations instead of the process simulation models (the VC-system). Each test team found one (different) error of the thirteen errors put into the software prior to the experiment; the majority of the errors were found prior to the functional test. The team using the AVC-system found three errors and the team using the VC-system found four, but the AVC-team judged 1% of the test items ‘not possible’, the VC-team 17%. It was concluded that the hypothesis that with AVC more errors could be found than with VC could not be accepted. So, for the FAT of PA-software of drinking water treatment plants, the addition of basic parameter relations to PA-emulation sufficed. It was not the exact process behavior that helped to find errors, but the passing of process thresholds.","virtual commissioning; drinking water treatment; process automation; emulation; process simulation model","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:208ae6bd-dc73-4559-959a-c39f7017ed8c","http://resolver.tudelft.nl/uuid:208ae6bd-dc73-4559-959a-c39f7017ed8c","A framework for reaching common understanding during sketching in design teams","Nik Ahmad Ariff, N.S.; Badke-Schaub, P.G.; Eris, O.; Suib, S.S.S.B.","","2012","In this study, we investigate the communication processes during sketching in design teams on theoretical and empirical levels, and propose two frameworks. The first one, the design-communication block framework, categorizes the types of activities that take place during sketching, and constitutes the analysis scheme for the empirical dimension of the work. The second framework, a framework for reaching common understanding during sketching in design teams, embodies the outcomes of our analysis. Our main finding is that although drawing activity itself forms the basis of team discourse during sketching, explaining, detailing and transfer activities make ideas more concrete, understandable and transferable within the team. Our findings also show that when verbal communication is blocked, the distinction between drawing activity, and explaining, detailing and transfer activities becomes even clearer.","sketching, design process, design thinking, communication, cognition","en","conference paper","The Design Society","","","","","","","","Industrial Design Engineering","Product Innovation Management","","","",""
"uuid:e1edbc52-08d3-4e11-95bc-3148545d84f4","http://resolver.tudelft.nl/uuid:e1edbc52-08d3-4e11-95bc-3148545d84f4","Product personality: From analysing to applying","Pourtalebi Hendehkhaleh, S.; Pouralvar, K.","","2012","Nowadays products are expected to perform their functions properly, and the competition for satisfying consumers lies in the field of product attachment and emotional characteristics. Products have a symbolic meaning in addition to their utilitarian benefits. This symbolic meaning, which refers to the physical product and is described with human personality characteristics, is called ""product personality"". Consumers make a psychological comparison between their self-concept and the image of a product, and the result of this comparison can positively influence product evaluation. The main mission of a designer is problem solving, and designing a product's appearance is a problem solving process as well. During industrial design studios, students learn how to utilize design techniques in a product's functional problem solving, but emotional design aspects and techniques are usually neglected. Therefore, students should learn how personal characteristics in objects' form can be recognized and how they can apply personality in a product's appearance. This paper introduces the method of ""product personality"" design as a problem solving process. This pedagogical process of ""product personality"" has been developed to help industrial design students to improve their abilities to understand form's capabilities and utilize them in designing products' appearance. After learning and practicing this process, students are able to explain the roles that a product's visual elements perform in product personality. Furthermore, they can translate these roles into rules and finally employ the rules as solutions to the problem of product personality design. 
The process includes three main phases of analysing, translating and applying.","product personality, aesthetics, pedagogical design process, product appearance; responsibilities, user-product attachment","en","conference paper","The Design Society, Institution of Engineering Designers","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:be2ef2c1-e2a6-4701-a9a2-7e38c88f7985","http://resolver.tudelft.nl/uuid:be2ef2c1-e2a6-4701-a9a2-7e38c88f7985","Idea²market: Implementing an ideation guide for product design education and innovation","De Wit, I.; Du Bois, E.; Moons, I.M.R.; Jacoby, A.","","2012","As current design students will be potential moderators for future design ideation sessions, the research focused on an ideation guide to support them in executing these sessions. Nowadays, mapping the number of tools onto the logic of the innovation process gives an overload of possibilities and reveals the following important difficulties: (i) which tools should be used in each specific situation, and (ii) how to implement the possible tools in the right manner. This research also focuses on the success factors and the conditions to deploy supporting tools in a given innovation process in order to obtain a higher success rate in generating ideas and obtaining the necessary buy-in for it. The arguments are based on literature research and a series of five workshops with experts from academia and industry. In the first part of the paper, the key problems in the use and implementation are brought to light. The second part of the paper focuses on possible solutions for these key problems and results in a moderator guide supporting the ideation process. We hypothesize that design students will be more effective and efficient in creating output with an ideation session supported by the Idea2Market guiding box. The third part of the paper gives evidence of the possibilities of the Idea2Market instruction manual by describing a test with students. 
The output of the students supported by the instructions was clearly better than that of a control group which was not supported by specific instructions.","innovation process, product design, ideation and moderator support","en","conference paper","The Design Society, Institution of Engineering Designers","","","","","","","","Industrial Design Engineering","Product Innovation Management","","","",""
"uuid:560ee5e4-6e1b-49e1-8ca7-97a2effd931d","http://resolver.tudelft.nl/uuid:560ee5e4-6e1b-49e1-8ca7-97a2effd931d","Integrated project delivery: The designer as integrator","Wamelink, J.W.F.; Koolwijk, J.S.J.; van Doorn, A.J.","","2012","Process innovation related to integrated project delivery is an important topic in the building industry. Studies on process innovation through the use of integrated contracts usually focus on contractors, and particularly on the possibility of forward integration into the building process. Three years ago, the first author investigated the process innovation capabilities of architectural firms by using the concept of system integration. This led to the idea that architects could take on the leading role in design-build contracts. Based on the results of that study, the conclusion was drawn that architects certainly have opportunities to act as a system integrator in the building process. By broadening their activities, architects can reclaim their central position, in which design and managerial skills can be combined. As a result of this promising view, a major client (a Dutch school board with a number of schools under its jurisdiction) and an architectural firm decided to develop a new concept. Together with the authors they developed the organisational and juridical aspects of the concept into a designer-led design-build method and implemented it in practice: the design and realisation of school buildings. Two projects were intensively monitored. It turned out that the concept has many advantages for both the client and the architectural firm. This paper describes the specific concept and the results of the two pilot projects, and shows that the recognised advantages are consistent with the literature.","designer-led design-build, integrated contracts, process performance","en","conference paper","Birmingham City University","","","","","","","","Architecture","Real Estate and Housing","","","",""
"uuid:7bd2e26a-9eeb-4693-9473-8978422b75c1","http://resolver.tudelft.nl/uuid:7bd2e26a-9eeb-4693-9473-8978422b75c1","Characterization of processes involved in the reset of a subtidal bar","Blossier, B.; Briere, C.; Roelvink, J.A.; Walstra, D.J.R.","","2012","Sand beach profiles can exhibit nearshore sandbars with complex 3D patterns. Under energetic conditions, these patterns disappear and the bars become, to a certain extent, alongshore uniform. This phenomenon is called a reset. The existing literature mainly concerns the development of the bar patterns (3D) or the cross-shore migration of sandbars (2D). Studies on reset-events from a three dimensional point of view are limited but can be found for instance in Reniers et al. (2004) and Smit (2010). This paper describes an analysis that is aimed at determining the relevant processes involved in the reset of three dimensional subtidal bars and at describing the relative influence of each of these processes. To perform this study, data collected during the ECORS campaign at Le Truc Vert (France) in 2008 are analyzed. In addition, a numerical study is performed using a research Delft3D model forced by the Xbeach wave generator to investigate the processes involved in a reset-event. The effects of the hydrodynamic external conditions on the flow patterns in the surfzone are investigated. Then the reset is studied in detail in order to understand the role of the different processes taken into account by the numerical model. The incident wave energy controls the intensity of the reset. The bar cross-shore migration is controlled by the wave breaking process. The wave breaking position and the dissipation rate of the roller energy control the generation of Shoreward Propagating Accretionary Waves (SPAW). 
The straightening of the subtidal bar occurs when the conditions induce a significant longshore current in the surfzone.","nearshore sandbars, energetic conditions, reset, process-based modeling, bar migration, Truc Vert","en","conference paper","Coastal Engineering Research Council","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:fc2d3631-9881-4a73-8a42-acf1d185e111","http://resolver.tudelft.nl/uuid:fc2d3631-9881-4a73-8a42-acf1d185e111","Towards scaling up grassroots innovations in India: A preliminary framework","De Keersmaecker, A.E.K.; Parmar, V.S.; Kandachar, P.V.; Baelus, C.; vandenbempt, K.","","2012","Grassroots innovations (GI) include need-based products or services that are created by individuals or groups within local communities. These products and services have the potential to contribute to the quality of an individual’s life and, on a larger scale, to the development of a community by creating new business activities. Grassroots innovations are often created in a resource constrained environment, with limited access to formal knowledge, infrastructure and materials, and limited buying power. Although GIs have the potential to be a commercial success, scaling up and commercialization of grassroots innovations is often inhibited because of a lack of formal education among innovators and the absence of an entrepreneurial culture and supporting infrastructure in the given context. This paper elaborates on the significance of GIs for people in developing countries. Grassroots innovations can be a subject of business development and be significant in empowering local communities. In order to live up to this potential, it is important to understand the mechanisms of how to scale up a grassroots innovation and overcome inhibiting factors. Until now, only a limited number of grassroots innovations have been scaled up or have been commercially launched in the developing countries. In India, for instance, some governmental organizations are supporting grassroots innovations which have the potential to be successful in the market. To get insights into the upscaling process, we propose to learn by examining existing scaling-up cases. Based on these insights, solutions can perhaps be suggested to optimize the scaling-up process. 
A preliminary framework is proposed to identify design drivers articulated by grassroots innovators and up-scalers towards successful scaling up. The framework also suggests that design drivers retrieved from the literature could be crucial for scaling up grassroots innovations successfully. It is essential to understand how these design drivers are reached. Conclusions are drawn to facilitate the construction of the framework.","grassroots innovations; scaling up process; product development; sustainable business; developing countries","en","conference paper","","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:a0012011-3b60-4dab-93de-728d8832005e","http://resolver.tudelft.nl/uuid:a0012011-3b60-4dab-93de-728d8832005e","Innovating innovation: Towards a NPD-management taxonomy","Smulders, F.E.H.M.; Brehmer, M.","","2012","This paper reports on a government funded research project that was aimed at the development of a meta-method to improve NPD-processes in industry by the use of tools and methods available in literature. We chose to develop an industry-relevant taxonomy that could serve as a means to categorise the NPD-tools and at the same time could facilitate the process of improving the NPD process itself, hence 'innovating innovation'. A design inclusive research process with various design and probe cycles resulted in the first reliable version of the taxonomy. Five case studies provided the view from the NPD trenches that informed the creation of an industry relevant taxonomy covering all NPD tools from literature. The subsequent design and probe cycles were performed with heavy involvement of the different potential users from industry, consultancies and academic institutions. During the design cycles the taxonomy and its operational method were tested and refined and named the NPD Management Canvas. The enthusiastic reactions in a final proof-of-concept showed us the value of the meta-method and the reliability of our taxonomy. Therefore the Innovation Management Canvas proposed in this paper looks very promising for application by academics (tailoring research portfolio) and by industry to innovate their innovation processes.","innovating innovation; NPD-methods; taxonomy; innovation process management","en","conference paper","CINet","","","","","","","","Industrial Design Engineering","Product Innovation Management","","","",""
"uuid:6aaa10fb-6cda-4ce0-8040-41a58a4ce443","http://resolver.tudelft.nl/uuid:6aaa10fb-6cda-4ce0-8040-41a58a4ce443","Does sketching stand alone as a communication tool during concept generation in design teams?","Nik Ahmad Ariff, N.S.; Badke-Schaub, P.G.; Eris, O.","","2012","The present study investigates the relation between sketching and communication in teams during the idea generation process in early concept generation. A quasi-experimental study was conducted with Masters students of Industrial Design Engineering at Delft University of Technology, Netherlands. Six groups of three students each had to solve a design problem in a given time. Whereas the experimental groups (n=3) were not allowed to talk during the design process, the control groups (n=3) did not face any restrictions. The experiments were recorded, observed and analyzed. As expected, both groups used communication to transfer and support their individual ideas. For the experimental groups, written language became the medium of communication in detailing the information in sketches. These findings show that sketching cannot stand alone; design teams need to use sketching and verbal communication in conjunction not only to produce well-developed ideas, but also to transfer them.","sketching; design thinking; design process; creativity; communication","en","conference paper","Design Research Society","","","","","","","","Industrial Design Engineering","Product Innovation Management","","","",""
"uuid:5030a273-b90d-4c4e-8109-0864c8927488","http://resolver.tudelft.nl/uuid:5030a273-b90d-4c4e-8109-0864c8927488","Effective Learning Environments in Relation to Different Learning Theories","Guney, A.; Al, S.","","2012","This paper discusses diverse learning theories that explain learning processes, viewed through the cognitive structure of the learning process. Learning environments are usually described in terms of pedagogical philosophy, curriculum design and social climate. There have been only a few studies on how the physical environment is related to the learning process. Many researchers consider teaching and learning issues as if they were independent of the physical environment, whereas physical conditions play an important role in gaining knowledge, in learning. A school's application of learning theories should determine its morphological characteristics. Designers should follow a holistic approach to create effective learning environments. This study therefore surveys diverse learning theories and describes the learning environments corresponding to each theory. School designers should try to create suitable morphological compositions to support these theories and should suggest design criteria for suitable spaces. Finally, some conclusions are drawn throughout the paper.","cognitive structure; learning and learning process; learning theories; learning environment; school design","en","journal article","Elsevier","","","","","","","","Architecture","Urbanism","","","",""
"uuid:5d5bba69-3913-427b-87f4-a93a689ceff7","http://resolver.tudelft.nl/uuid:5d5bba69-3913-427b-87f4-a93a689ceff7","Implementing participatory water management: Recent advances in theory, practice, and evaluation","Von Korff, Y.; Daniell, K.A.; Moellenkamp, S.; Bots, P.W.G.; Bijlsma, R.M.","","2012","Many current water planning and management problems are riddled with high levels of complexity, uncertainty, and conflict, so-called “messes” or “wicked problems.” The realization that there is a need to consider a wide variety of values, knowledge, and perspectives in a collaborative decision making process has led to a multitude of new methods and processes being proposed to aid water planning and management, which include participatory forms of modeling, planning, and decision aiding processes. However, despite extensive scientific discussions, scholars have largely been unable to provide satisfactory responses to two pivotal questions: (1) What are the benefits of using participatory approaches?; (2) How exactly should these approaches be implemented in complex social-ecological settings to realize these potential benefits? In the study of developing social-ecological system sustainability, the first two questions lead to a third one that extends beyond the one-time application of participatory approaches for water management: (3) How can participatory approaches be most appropriately used to encourage transition to more sustainable ecological, social, and political regimes in different cultural and spatial contexts? The answer to this question is equally open. This special feature on participatory water management attempts to propose responses to these three questions by outlining recent advances in theory, practice, and evaluation related to the implementation of participatory water management. 
The feature is largely based on an extensive range of case studies that have been implemented and analyzed by cross-disciplinary research teams in collaboration with practitioners, and in a number of cases in close cooperation with policy makers and other interested parties such as farmers, fishermen, environmentalists, and the wider public.","adaptive management; collaborative decision making; evaluation; interactive planning; participatory modeling; participatory research; process design; public participation; social learning; stakeholder participation; water resources management","en","conference paper","Acadia University Canada","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:55930a1e-61ba-4bec-8fd8-0d96eaca5db3","http://resolver.tudelft.nl/uuid:55930a1e-61ba-4bec-8fd8-0d96eaca5db3","Multiscale Structure-Performance Relationships in Supported Palladium Catalysis for Multiphase Hydrogenations","Bakker, J.J.W.","Moulijn, J.A. (promotor); Kapteijn, F. (promotor); Kreutzer, M.T. (promotor)","2012","The performance of heterogeneous catalysts in multiphase reactions in general is governed by different types of extrinsic and intrinsic structural effects on all length scales, i.e., on the macro- (m to cm), meso- (mm to µm), and microlevel (nm). This PhD research, with a catalysis-engineering approach, focused on several of these multiscale structure-performance relationships of supported palladium (Pd) catalysts applied in industrially important multiphase hydrogenations. The structure-performance relationships were studied in various batch and continuous reactors, most of which are related to important topics in process intensification such as monolithic reactors and flow chemistry. The performance of monolithic Pd catalysts was enhanced by combining a new type of structured highly porous monoliths with a pressure-pulse-generating gas-liquid flow (i.e., Taylor flow). This induced a convective flow inside the ‘open’ monolith walls, thereby enhancing the mass exchange with the Pd catalyst. This favourable result opens the avenue to higher catalyst loadings without increasing internal mass transfer limitations. Furthermore, a proof of concept study showed that a cheap and readily available gas chromatography capillary, wall-coated with an alumina-supported Pd catalyst and operated in the Taylor flow regime, can be used to synthesize high-value products and to rapidly produce (visual) information about catalytic hydrogenations. This Pd capillary flow device is an excellent alternative for expensive microchip technology and bulky round-bottom flasks. 
Finally, the intrinsic property of Pd to absorb hydrogen into its crystal lattice was shown to have a strong influence on its performance in the hydrogenation of aromatic nitriles. The transformation into stable Pd β-hydride above a certain threshold hydrogen pressure induced a persistent change in activity and by-product selectivity.","Heterogeneous catalysis; Hydrogenation; Structured reactors; Palladium; Monoliths; Flow chemistry; Catalyst performance; Nitriles; Azides; Alkynes; Taylor flow; Residence time distribution; process intensification; Multiphase hydrogenation","en","doctoral thesis","","","","","","","","","Applied Sciences","Chemical Engineering","","","",""
"uuid:3d78c977-ef85-4f9f-bcba-2e0a37c13745","http://resolver.tudelft.nl/uuid:3d78c977-ef85-4f9f-bcba-2e0a37c13745","Silicon Technology for Integrating High-Performance Low-Energy Electron Photodiode Detectors","Sakic, A.","Nanver, L.K. (promotor)","2012","","silicon photodiodes; p+n diode; Scanning Electron Microscopy; electron detector; low-energy electrons; responsivity; electron irradiation; diode saturation current; pure boron layer; boron depositions; ultrashallow junctions; silicon epitaxy; high-resistivity substrates; substrate thinning; RC constant; Aluminum-induced Crystallization; low-temperature processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","ECTM","","","",""
"uuid:be9e3622-1355-4916-9948-1c336dcbc7c6","http://resolver.tudelft.nl/uuid:be9e3622-1355-4916-9948-1c336dcbc7c6","Engineering Flexible and Agile Services: A Reference Architecture for Administrative Processes","Gong, Y.","Tan, Y.H. (promotor); Janssen, M.F.W.H.A. (promotor)","2012","To provide up-to-date services to citizens and businesses, administrative organizations need to make sure that their business services, processes, and supporting applications are flexible and agile enough to deal with changing situations and to ensure legal compliance at all times. The research presented in this dissertation provides a reference architecture to improve the flexibility and agility of business processes for administrative organizations. The reference architecture was tested using a prototype and two pilot services according to scenarios from different organizational contexts. The test indicates that the implementation of the reference architecture allows for quick adaptation of business processes at low cost, and ensures the legal compliance of the business processes.","reference architecture; business process management; flexibility; agility","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","Infrastructure Systems & Services","","","",""
"uuid:68883d84-3cf3-4b5c-a8ee-9c0a92604fce","http://resolver.tudelft.nl/uuid:68883d84-3cf3-4b5c-a8ee-9c0a92604fce","Deblending of seismic data","Mahdad, A.","Mulder, W.A. (promotor)","2012","Seismic imaging is one of the most common geophysical techniques for hydrocarbon exploration. Seismic acquisition is a trade-off between economy and quality. In conventional acquisition, the time intervals between successively firing sources are large enough to avoid interference in time. To obtain an efficient survey, the spatial source sampling is often (too) large, which results in spatial source aliasing. Simultaneous or blended acquisition was proposed by Beasley et al. (1998) and Berkhout (2008) in order to address this issue. In blended acquisition, temporal overlap between shot records is allowed. This additional degree of freedom in survey design has the potential to significantly reduce seismic acquisition costs while maintaining or even improving the data quality. Incoherent shooting plays a major role in blended acquisition. It aims at preserving the energy distribution over the entire data bandwidth. There are several parameters that have to be considered in blended acquisition. Source encoding, lateral source configuration, blending factor, and survey condition are the most important parameters to be taken into account. These parameters are closely related and should not be considered independently of each other. The acquired blended data can be processed in two ways: direct imaging and deblending. Deblending, which is the main focus of this thesis, is the procedure of retrieving the data as if they were acquired in the conventional, unblended way. After deblending, the conventional, standard processing flows can be applied in practice. Since deblending is an underdetermined problem, a unique solution cannot be achieved by matrix inversion. A least-squares solution could be used instead.
However, the least-squares solution does not remove the interference due to other sources, called the blending noise. Fortunately, the character of the blending noise is different in different domains, i.e., it is coherent in the common source domain, but incoherent in the common receiver, common offset, and common midpoint domains. At the same time, the signal remains coherent in all domains. The incoherent character of the blending noise is directly related to the blended acquisition design. The coherence of the signal and the incoherence of the blending noise are the key properties that are used for deblending. In this thesis, an iterative inversion method is proposed for deblending, which is based on estimation and subtraction of the blending noise. In this method, the blending noise is modelled from the estimated signal and subtracted in an iterative fashion. The signal estimate is achieved by a process called coherence-pass filtering, which consists of a filter in some domain followed by a thresholding step. At each iteration, the threshold is lowered and more of the blending noise is estimated and subsequently removed. Any type of filter that is capable of distinguishing between coherent signal and incoherent blending noise can be utilized in the coherence-pass filtering process. Three implementations of a coherence-pass filter, namely a median filter, an f-k filter, and a combined median-f-k filter are discussed. Among these, the combined median-f-k filter is the better choice because it combines the median filter's power in detecting blending noise with the f-k filter's power in preserving the signal amplitude. The deblending process can be implemented in different data domains. The domain that is selected for deblending depends to a great extent on the blended acquisition design, acquisition geometry, and data properties.
The algorithmic aspects of the deblending algorithm that are discussed in this thesis are related to the threshold automation, stopping criterion, filter edge artefacts, and signal estimation errors. The automation of the thresholding process, which is based on the filter's impact on the reduction of the blending-noise amplitude, leads to a hands-off algorithm for deblending, optimized for both efficiency and effectiveness. The stopping criterion is based on a least-squares measure that is computed after each iteration. The deblending process is stopped when the measure reaches a stable state where no or negligible improvement is achieved. Furthermore, it is shown that one of the major limiting factors is related to edge artefacts generated by the filter. The blending noise that is estimated by the coherence-pass filtering is called the signal estimation error and is mainly caused by constructive or destructive interference of the blending noise with the signal. The effect of the signal estimation errors is evaluated by introducing errors in the coherence-pass filtering process. The result of this analysis shows that these signal estimation errors can be handled properly. The addressed practical considerations are mainly coherence- and noise-related issues. The incoherence in the signal is mainly caused by irregularities in the acquisition geometry, near-surface complexities, and topographic variations. Since coherence of the signal plays an essential role in deblending, the incoherence in the signal must be minimized prior to deblending. On the other hand, some of the noise-related issues can be handled during deblending. Proper handling of the practical issues is key to the success of the deblending process. The feasibility of the deblending algorithm is studied by applying it to three conventionally acquired datasets that are blended numerically. In the first example, 2D marine data are blended using upsweep and downsweep signals as source codes.
Due to the favourable, sophisticated source encoding, the deblending process can be performed per blended shot record using thresholding only, i.e., without blending noise filtering. In the second example, 2D land data are blended by time delays as source codes. In this case, the deblending process is performed in the common offset domain. Results are shown both for the data and their stack. In the last example, 3D land data are blended using two different blending configurations and their deblending results are compared. In this case the deblending process is performed in the common receiver domain. Overall, the obtained results are considered very promising.","acquisition; seismic data; processing; inversion; blending; deblending","en","doctoral thesis","Uitgeverij BOXPress, Oisterwijk, The Netherlands","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:9e12cde6-e127-4e5a-aea5-e57fcb6b8a32","http://resolver.tudelft.nl/uuid:9e12cde6-e127-4e5a-aea5-e57fcb6b8a32","Long-term process-based morphological modeling of large tidal basins","Dastgheib, A.","Roelvink, J.A. (promotor)","2012","The morphology of tidal basins includes a wide range of features developing along different spatial and temporal scales. Examples are shoals, channels, banks, dunes and ripples. Coastal engineers use their engineering tools to answer questions on the processes governing the short-term (< decades) development of these morphological features. Geologists apply their conceptual models and reconstruction methods to answer questions related to a much longer time scale (> centuries). This two-sided approach has left us with limited understanding of processes occurring on intermediate scales (> decades and < centuries), whereas the morphodynamics of these intermediate scales are of special concern to sustainable coastal zone management. This study is part of a collective effort to bridge the aforementioned gap by extending the use of coastal engineering tools (process-based models) to geological time scales to provide more understanding of the physical processes governing the long-term morphodynamic behavior of tidal basins. A fundamental question addressed is whether or not process-based models can reproduce trustworthy long-term developments. To answer this question the Dutch Waddenzee is chosen as a reference case. This study suggests that the question has a positive answer. By comparing model results with measured developments in the Waddenzee, this study shows that a process-based model can reproduce channel-shoal patterns and their long-term development qualitatively well. Modeled parameters such as area, volume and height of the inter-tidal flats obey the data-based equilibrium equations. This study also demonstrates the model's ability to qualitatively assess the impact of large-scale human intervention in a tidal basin.
For example, the model is able to reproduce the change in tidal transport regime and the ensuing morphodynamic changes due to an extreme impact such as the closure of the Zuiderzee. Although the highly schematized simulations produced qualitatively good results, they also revealed the need for a better process description. As a first step to improve model performance, a methodology was developed to account for sediment composition and distribution in the bed. In the next step, different methodologies to schematize the wave climate for long-term morphological simulations were investigated. Model results show that the chronology of wave conditions and the wave schematization approach have a limited effect. The outcomes of long-term morphodynamic simulations with different wave and tidal conditions are in good agreement with conceptual models. For the reference case, model results revealed that the morphological impact of wind waves is not only important outside the inlet and at the ebb-tidal delta, but also within the tidal basin. A final conclusion is that adding methodologies for bed composition and wave schematization to the model of the Waddenzee area improved the hindcasting simulations qualitatively.","morphology; tidal basin; tidal inlet; ebb tidal delta; Delft3D; Process-based Modelling; Waddenzee; afsluitdijk; morphological factor; sediment mixture; morphological tide","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:986ea1c5-9e30-4aac-ab66-4f3b6b6ca002","http://resolver.tudelft.nl/uuid:986ea1c5-9e30-4aac-ab66-4f3b6b6ca002","Reinforcement Learning on autonomous humanoid robots","Schuitema, E.","Jonker, P.P. (promotor); Babuska, R. (promotor)","2012","Service robots have the potential to be of great value in households, health care and other labor-intensive environments. However, these environments are typically unique, not very structured and frequently changing, which makes it difficult to make service robots robust and versatile through manual programming. Having robots learn to solve tasks autonomously through interaction with the real world forms an attractive alternative. With Reinforcement Learning (RL), a system can learn to perform tasks by receiving only coarse feedback on its actions: desired behavior is reinforced by positive rewards, undesired behavior is punished by negative rewards. In this research, a bipedal walking robot named Leo was designed and built specifically to study the application of RL to real robots. Robot Leo is able to learn two basic motor control tasks: placing a foot on a step of stairs, and walking. To learn to walk, Leo receives a positive reward for moving its foot forward, and negative rewards for falling and for spending time and energy. This process takes about 5 hours of practice in simulation, as well as thousands of falls. On the real prototype, the learning time was shortened by first letting the robot observe a hand-coded, sub-optimal controller, which it was quickly able to mimic and even improve in a matter of hours. Algorithmic improvements are proposed to address complications of RL on real robots, such as time delays in the control loop and large disturbances such as a sudden push.
To reduce the continuous risk of damage due to the trial-and-error nature of RL, a modular approach is proposed through which the robot can coarsely but quickly learn about the risk of its behavior and learn the actual task more safely and in more detail.","robotics; robots; reinforcement learning; markov decision process; temporal difference learning","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","BioMechanical Engineering","","","",""
"uuid:01eaf735-46af-46a8-8222-7c8c12080062","http://resolver.tudelft.nl/uuid:01eaf735-46af-46a8-8222-7c8c12080062","Integration of drinking water treatment plant process models and emulated process automation software","Worm, G.I.M.","Rietveld, L.C. (promotor)","2012","The objective of this research is to limit the risks of fully automated operation of drinking water treatment plants and to improve their operation by using an integrated system of process models and emulated process automation software. This thesis contains the design of such an integrated system. The use of the system is investigated in the three identified applications, i) optimization of process control, ii) training of operation supervisors and iii) virtual commissioning of process automation software.","drinking water treatment; simulator; emulation; process automation; operator training","en","doctoral thesis","Water Management Academic Press","","","","","","","2012-11-12","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:176b3496-dee3-4326-8e0f-277192a03306","http://resolver.tudelft.nl/uuid:176b3496-dee3-4326-8e0f-277192a03306","Compressive Sensing and Fast Simulations: Applications to Radar Detection","Anitori, L.","Hoogeboom, P. (promotor)","2012","In most modern high-resolution multi-channel radar systems one of the major problems to deal with is the huge amount of data to be acquired, processed and/or stored. But why do we need all these data? According to the well-known Nyquist-Shannon sampling theorem, real signals have to be sampled at a rate of at least twice the signal bandwidth to prevent ambiguities. Therefore, sampling of very wide bandwidths may require Analog to Digital Converter (ADC) hardware that is unavailable or very expensive; especially in multi-channel systems, the cost and power consumption can become critical factors. In applications involving interleaving of radar modes in time or space (antenna aperture), multi-function operation often leads to conflicting requirements on sampling rates in both time and spatial domains. So while, on one hand, the increased number of degrees of freedom improves the system performance, on the other hand it puts a significant burden on both the off-line analysis and performance evaluation of sophisticated detectors, and on the real-time acquisition and processing. For example, space-time adaptive processing algorithms significantly enhance the detection of targets buried in noise, clutter and jamming. However, evaluating the optimal filter weights is an immense computational load when simulating such detectors in the design phase as well as in real-time implementation. In some cases, measurement time may also be a constraint, as in 3D radar imaging for airport security inspection of passengers. Conventional acquisition of a full 3D high-resolution image requires a measurement time that can be unacceptable in this situation.
In this thesis we investigate sampling methods that can deal with the problems of processing complexity as well as analysis (or performance evaluation) extremely efficiently by reducing the required amount of samples. By cleverly using properties of the signals or random variables involved, the considered techniques, namely Compressive Sensing (CS) and Importance Sampling (IS), both alleviate the burden related to data handling in complex radar detectors. These methods, although very different in nature, provide an alternative to classical sampling techniques. The first, compressive sensing, is based on a revolutionary acquisition and processing theory that enables reconstruction of sparse signals from a set of measurements sampled at a much lower rate than required by the Nyquist-Shannon theorem. This results in both a shorter acquisition time and a reduced amount of data. The second, importance sampling, has roots in statistical physics and represents a fast and effective method for the design and analysis of detectors whose performance has to be evaluated by simulations. By efficiently sampling the underlying probability density function, importance sampling provides a very fast alternative to conventional Monte Carlo simulation. The first part of the thesis deals with the design and analysis of adaptive detectors for compressive sensing based radars. In systems using compressive sensing, the target signal, which is assumed to be sparse, is estimated from the noisy, undersampled measurements via L1-norm minimization algorithms. CS recovery algorithms require proper setting of parameters (thresholds) and are therefore not inherently adaptive. Classical radar systems employ a matched filter, matched to the transmitted waveform, followed by a Constant False Alarm Rate (CFAR) processor for the detection of targets embedded in unknown background clutter and noise.
However, the non-linearity introduced by a CS recovery algorithm does not allow straightforward application of conventional adaptive detector design methodology. In the work reported here, by making use of the properties of the Complex Approximate Message Passing algorithm, we are able to propose an adaptive non-linear recovery stage combined with classical CFAR processing, and derive a novel adaptive CS detector. Additionally, our theoretical findings are also demonstrated via both simulated and experimental results. Furthermore, we provide a methodology to predict the performance of the proposed detectors that can be used to evaluate how transmitted power can be traded against undersampling, making it possible to incorporate CS-based sampling and detection in radar system design. The second part of this thesis focuses on deriving methods of importance sampling for fast simulation of rare events especially applicable to Space Time Adaptive Processing (STAP) radar detectors. These types of methods are, however, of much wider applicability. They can and have been used in many other situations that require intensive and time-consuming Monte Carlo simulations. In conducting rare event simulations of systems that involve signal processing operations that are mathematically complex, there are two principal issues that contribute to simulation time. The first issue concerns the rare event itself whose probability is being sought. The second concerns the computational intensity that accompanies the signal processing. It is a daunting task to conduct conventional Monte Carlo simulations that involve several millions of trials to estimate low false alarm probabilities, with as many matrix inversions as required in STAP.
We demonstrate how fast simulation schemes can deal with these aspects, and devise tailored importance sampling biasing schemes for evaluating the performance of STAP detectors which are analytically difficult or impossible to analyze, such as low-rank STAP detectors. By comparing our results with traditional Monte Carlo methods, we show that importance sampling can achieve tremendous gain in terms of computational time.","compressive sensing; compressed sensing; compressive sampling; importance sampling; fast simulations; Monte Carlo; radar; detection; Approximate Message Passing; Complex Approximate Message Passing; Constant False Alarm Rate; Space Time Adaptive Processing; L1-norm; Receiver Operating Characteristic","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","Geoscience and Remote Sensing","","","",""
"uuid:bff4c9dc-21d5-40dd-8073-2f7859f4b13d","http://resolver.tudelft.nl/uuid:bff4c9dc-21d5-40dd-8073-2f7859f4b13d","The cessation of continuous turbulence as precursor of the very stable nocturnal boundary layer","Van de Wiel, B.J.H.; Moene, A.F.; Jonker, H.J.J.","","2012","The mechanism behind the collapse of turbulence in the evening as a precursor to the onset of the very stable boundary layer is investigated. To this end a cooled, pressure-driven flow is investigated by means of a local similarity model. Simulations reveal a temporary collapse of turbulence whenever the surface heat extraction, expressed in its nondimensional form h/L, exceeds a critical value. As any temporary reduction of turbulent friction is followed by flow acceleration, the long-term state is unconditionally turbulent. In contrast, the temporary cessation of turbulence, which may actually last for several hours in the nocturnal boundary layer, can be understood from the fact that the time scale for boundary layer diffusion is much smaller than the time scale for flow acceleration. This limits the available momentum that can be used for downward heat transport. In case the surface heat extraction exceeds the so-called maximum sustainable heat flux (MSHF), the near-surface inversion rapidly increases. Finally, turbulent activity is largely suppressed by the intense density stratification that supports the emergence of a different, calmer boundary layer regime.","atmosphere-land interaction; heat budgets/fluxes; small scale processes; stability; surface fluxes; nonlinear models","en","journal article","American Meteorological Society","","","","","","","2013-05-01","Applied Sciences","Multi-Scale Physics","","","",""
"uuid:0bcf237f-57df-456c-bc88-bf1dc9da074b","http://resolver.tudelft.nl/uuid:0bcf237f-57df-456c-bc88-bf1dc9da074b","Process mining approach for recovery of realized train paths and route conflict identification","Kecman, P.; Goverde, R.M.P.","","2012","Data records from train describer systems are a valuable source of information for analyzing railway operations performance and assessing railway timetable quality. This paper presents a tool based on process mining of event data records from the Dutch train describer system TROTS. The underlying algorithms automatically identify route conflicts with conflicting trains, determine accurate arrival and departure times/delays at stations, and reconstruct the train paths on track section and blocking time level. A graphical user interface and visualizations of the time-distance diagrams and blocking time diagrams support and simplify the analysis of running times, dwell times, incidents, track obstructions, disruptions, and structural errors in the timetable design. A case study of one day of traffic on a busy railway corridor in the Netherlands is presented, as well as examples illustrating the graphical user interface.","train describers; realisation data; process mining; route conflicts","en","conference paper","TRAIL Research School","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","",""
"uuid:88d3c622-86e9-4841-ab5b-70b0528ad4c4","http://resolver.tudelft.nl/uuid:88d3c622-86e9-4841-ab5b-70b0528ad4c4","Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering for Discontinuous Galerkin Solutions: Improved Errors Versus Higher-Order Accuracy","King, J.; Mirzaee, H.; Ryan, J.K.; Kirby, R.M.","","2012","Smoothness-increasing accuracy-conserving (SIAC) filtering has demonstrated its effectiveness in raising the convergence rate of discontinuous Galerkin solutions from order k + 1/2 to order 2k + 1 for specific types of translation invariant meshes (Cockburn et al. in Math. Comput. 72:577–606, 2003; Curtis et al. in SIAM J. Sci. Comput. 30(1):272–289, 2007; Mirzaee et al. in SIAM J. Numer. Anal. 49:1899–1920, 2011). Additionally, it improves the weak continuity in the discontinuous Galerkin method to k − 1 continuity. Typically this improvement has a positive impact on the error quantity in the sense that it also reduces the absolute errors. However, not enough emphasis has been placed on the difference between superconvergent accuracy and improved errors. This distinction is particularly important when it comes to understanding the interplay introduced through meshing, between geometry and filtering. The underlying mesh over which the DG solution is built is important because the tool used in SIAC filtering—convolution—is scaled by the geometric mesh size. This heavily contributes to the effectiveness of the post-processor. In this paper, we present a study of this mesh scaling and how it factors into the theoretical errors.
To accomplish the large volume of post-processing necessary for this study, commodity streaming multiprocessors were used; we demonstrate for structured meshes up to a 50× speed-up in the computational time over traditional CPU implementations of the SIAC filter.","high-order methods; discontinuous Galerkin; SIAC filtering; accuracy enhancement; post-processing; hyperbolic equations","en","journal article","Springer-Verlag","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:ca0cd7fa-e8ad-4c2b-b214-bd0525d763bf","http://resolver.tudelft.nl/uuid:ca0cd7fa-e8ad-4c2b-b214-bd0525d763bf","Process Intensification in Crystallization: Submicron Particle Generation Using Alternative Energy Forms","Radacsi, N.","Stankiewicz, A.I. (promotor)","2012","Crystallization is one of the oldest separation and product formation techniques that continues to be in use today. Despite its long history, it only started to develop significantly in the past few decades. In this thesis, the application of Process Intensification in crystallization is investigated. Process Intensification is a set of often radically innovative principles in process and equipment design, which can bring significant benefits in terms of process and chain efficiency, capital and operating expenses, quality, wastes, process safety, etc. Alternative energy forms as basic elements of Process Intensification are investigated by applying electric fields and plasma technology in crystallization processes. Three main topics are discussed in this thesis: a) Submicron-sized and nano-sized particles can have beneficial product properties compared to conventionally sized crystalline products. Electrospray Crystallization, an advanced crystallization technique, can serve as a tool to produce such submicron-sized particles. In this thesis, it was investigated whether electrospray crystallization can be used to produce (1) energetic materials with a reduced sensitivity and (2) submicron-sized pharmaceutical compounds for improved dissolution and absorption. Electrospray crystallization of a solution is an integrated process of spraying and crystallization that uses a high voltage to produce a fine aerosol of droplets in the micron-size range. During the process, the emitted solvent droplets evaporate and a droplet disruption process (Coulomb fission) occurs, which creates even smaller droplets.
Due to solvent evaporation, supersaturation is eventually achieved and the formation of submicron crystals can commence. Electrospray crystallization is an efficient, cost-effective and simple method for the production of submicron-sized crystals, but it suffers from a low production rate and can be challenging to scale up. In this thesis, the process parameters for establishing a stable jet for producing submicron-sized particles were determined. The operation window to establish a continuous jet and produce submicron-sized crystals is relatively narrow, but experimentally feasible to maintain. Energetic crystals of RDX and HMX were produced with a mean size of around 500 nm by electrospray crystallization. The produced explosive crystals were tested for impact and friction sensitivity. The samples were remarkably insensitive to friction stimuli, while an insignificant difference for the impact sensitivity was observed. With similar process parameters, submicron-sized crystals of a poorly water-soluble active pharmaceutical ingredient, niflumic acid, were produced. In the absence of excipients, the submicron-sized niflumic acid showed no significant difference in its dissolution profile compared to the conventional one. However, upon mixing the excipients, D-Mannitol and Poloxamer 188, with the submicron-sized niflumic acid, the dissolution rate of the drug was enhanced. Thus, it is possible to increase the bioavailability of drugs by drastically reducing the crystal size, while preventing their aggregation by using the proper excipients. b) Plasma Crystallization is a new crystallization technique, in which an atmospheric-pressure cold ionized gas is used to generate submicron-sized crystals. This novel type of plasma, the Surface Dielectric Barrier Discharge (SDBD), is a plasma made by several self-terminating microdischarges on a surface. A nebulizer system sprays the solution aerosol into the plasma with the help of a carrier gas.
The plasma charges and heats the droplets. Upon evaporation Coulomb fission occurs, supersaturation increases, and nucleation and crystal growth take place within the small, confined volume offered by the small droplets. Compared to electrospray crystallization, much higher production rates can be achieved. The energetic material, RDX, and the active pharmaceutical ingredient, niflumic acid, and its excipient, Poloxamer 188, were produced by plasma crystallization with a significant size reduction compared to the conventional products. While there was no measurable change in the sensitivity of RDX, a substantial increase in the dissolution rate of the submicron niflumic acid crystals was observed in the presence of the plasma-made excipient. c) The effect of a constant high electric field was investigated during the cooling crystallization of isonicotinamide in 1,4-dioxane (Electrostatic Crystallization). Two experimental setups were built in order to examine the electric field effect, with a focus on crystal polymorphism control. An inhomogeneous electric field was generated in a controlled crystallization environment. A Crystalline station with an on-board camera system offered in situ investigation of the experiments. A more homogeneous electric field was generated in a different setup, but without a precise temperature control. Image analysis from the Crystalline station experiments showed that the applied electric field induced fluid motion of the solution due to the Lorentz force acting on the isonicotinamide molecules in solution. This induced fluid motion was further visualized by using a suspension of the isonicotinamide-1,4-dioxane system. Image analysis also showed that the nucleation was localized to the anode, and crystals were formed only on the anode surface. The electric field generated a concentration gradient, with the highest solution concentration around the anode.
The crystal growth rate was also measured with the help of the on-board camera system. It was found that in the presence of the electric field, the growth rate of the isonicotinamide crystals formed on the anode is 15 times higher than in the absence of the electric field. From this crystal growth rate increase, the local increase in the supersaturation ratio at the anode was estimated, and found to be at least 2.5 times higher in the presence of the electric field than in its absence. In the absence of the electric field, the metastable, chain-like form I of isonicotinamide was crystallized in both experimental setups. In the inhomogeneous electric field, both form I and form II of isonicotinamide were crystallized. By applying an approximately homogeneous, constant electric field during the crystallization, only the stable form II was formed. In addition, concerns regarding the reliability of standard small-scale sensitivity test methods for submicron-sized explosives were discussed in this thesis, since the obtained results for the produced explosive materials are questionable. In order to test the quality of the produced submicron-sized energetic materials, a series of small-scale sensitivity tests were carried out. Impact and friction sensitivity tests and ballistic impact chamber tests were performed to determine the product sensitivity. Concerns were found with the standard friction and ballistic impact chamber sensitivity test methods, and suggestions were made to improve these tests. The friction sensitivity tests for all submicron-sized crystals showed no ignition even at the highest possible load. The ballistic impact chamber tests also showed no or only partial ignition with all the submicron-sized explosives. The submicron-sized crystals were distributed among the grooves of the porcelain plate used in the friction test or among the sand grains of the sandpaper used in the ballistic impact chamber test.
There is a need to revisit the ignition mechanism of these sensitivity test methods, and to make suggestions for accurate measurement methods for the sensitivity of nano-sized explosives. Recommendations have been made to develop new tests that rely only on the interactions between the particles, making them applicable to sensitivity testing of submicron/nano-sized energetic materials. A friction initiation setup developed at TNO more than 30 years ago might provide a more reliable measurement of the friction sensitivity of submicron- or nano-sized energetic materials, by allowing only frictional heating between the sample particles and excluding all other sources of frictional heating.","crystallization; process intensification; electrospray; plasma; rdx; niflumic acid; electric field; polymorphism; sensitivity tests","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:6330692c-b6cd-428b-aa49-80171683749b","http://resolver.tudelft.nl/uuid:6330692c-b6cd-428b-aa49-80171683749b","Two-Dimensional Fourier Cosine Series Expansion Method for Pricing Financial Options","Ruijter, M.J.; Oosterlee, C.W.","","2012","The COS method for pricing European and Bermudan options with one underlying asset was developed in [F. Fang and C. W. Oosterlee, SIAM J. Sci. Comput., 31 (2008), pp. 826--848] and [F. Fang and C. W. Oosterlee, Numer. Math., 114 (2009), pp. 27--62]. In this paper, we extend the method to higher dimensions, with a multidimensional asset price process. The algorithm can be applied to, for example, pricing two-color rainbow options but also to pricing under the popular Heston stochastic volatility model. For smooth density functions, the resulting method converges exponentially in the number of terms in the Fourier cosine series summations; otherwise we achieve algebraic convergence. The use of an FFT algorithm, for asset prices modeled by Lévy processes, makes the algorithm highly efficient. We perform extensive numerical experiments.","Fourier cosine expansion method; European and Bermudan options; two-color rainbow options; basket options; Lévy process; Heston dynamics","en","journal article","Society for Industrial and Applied Mathematics (SIAM)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:14c62502-83e4-495c-ad62-6ed8d7b2e3c6","http://resolver.tudelft.nl/uuid:14c62502-83e4-495c-ad62-6ed8d7b2e3c6","Substrate temperature and electron fluence effects on metallic films created by electron beam induced deposition","Rosenberg, S.G.; Landheer, K.; Hagen, C.W.; Fairbrother, D.H.","","2012","Using three different precursors [MeCpPtMe3, Pt(PF3)4, and W(CO)6], an ultra-high vacuum surface science approach has been used to identify and rationalize the effects of substrate temperature and electron fluence on the chemical composition and bonding in films created by electron beam induced deposition (EBID). X-ray photoelectron spectroscopy data indicate that the influence of these two processing variables on film properties is determined by the decomposition mechanism of the precursor. For precursors such as MeCpPtMe3 that decompose during EBID without forming a stable intermediate, the film's chemical composition is independent of substrate temperature or electron fluence. In contrast, for Pt(PF3)4 and W(CO)6, the initial electron stimulated deposition event in EBID creates surface bound intermediates Pt(PF3)3 and partially decarbonylated Wx(CO)y species, respectively. These intermediates can react subsequently by either thermal or electron stimulated processes. Consequently, the chemical composition of EBID films created from either Pt(PF3)4 or W(CO)6 is influenced by both the substrate temperature and the electron fluence. Higher substrate temperatures promote the ejection of intact PF3 and CO ligands from Pt(PF3)3 and Wx(CO)y intermediates, respectively, improving the film's metal content. However, reactions of Pt(PF3)3 and Wx(CO)y intermediates with electrons involve ligand decomposition, increasing the irreversibly bound phosphorus content in films created from Pt(PF3)4 and the degree of tungsten oxidation in films created from W(CO)6.
Independent of temperature effects on chemical composition, elevated substrate temperatures (>25 °C) increased the degree of metallic character within EBID deposits created from MeCpPtMe3 and Pt(PF3)4.","bonding processes; chemical analysis; electron stimulated desorption; metallic thin films; oxidation; photoelectron spectra","en","journal article","American Vacuum Society","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:2e97d85b-7779-4391-af3a-8916e9aecdae","http://resolver.tudelft.nl/uuid:2e97d85b-7779-4391-af3a-8916e9aecdae","Superconducting detector dynamics studied by quantum pump-probe spectroscopy","Heeres, R.W.; Zwiller, V.","","2012","We explore the dynamics of superconducting single-photon detectors (SSPDs) on the picosecond time-scale using a correlated photon-pair source based on spontaneous parametric downconversion (SPDC), corresponding to a pump-probe experiment at the single-photon level. We show that the detector can operate in a regime where the two-photon detection probability is orders of magnitude larger than the single-photon detection probability. The characteristic relaxation time-scale of the out-of-equilibrium hot-spot is found to be ≈ 15 ps. Our measurement technique is an effective tool to study fast two-photon processes, without requiring power-dependence measurements to determine the number of photons involved.","superconducting photodetectors; two-photon processes","en","journal article","American Institute of Physics","","","","","","","","Applied Sciences","Quantum Nanoscience","","","",""
"uuid:65483e5b-d171-4c8f-88f5-5347dad5abea","http://resolver.tudelft.nl/uuid:65483e5b-d171-4c8f-88f5-5347dad5abea","Data matching for free-surface multiple attenuation by multidimensional deconvolution","Van der Neut, J.R.; Frijlink, M.; Borselen, R.","","2012","A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removal of multiples by multidimensional deconvolution (MDD), i.e. by inversion, does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially at interpolated near offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.","image processing; controlled source seismology; wave propagation","en","journal article","Wiley","","","","","","","2013-04-10","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:8ebb71c9-6683-496f-9d2a-0a4283351277","http://resolver.tudelft.nl/uuid:8ebb71c9-6683-496f-9d2a-0a4283351277","Efficient Pricing of Early-Exercise and Exotic Options Based on Fourier Cosine Expansions","Zhang, B.","Oosterlee, C.W. (promotor)","2012","In the financial world, two tasks are of prime importance: model calibration and portfolio hedging. For both tasks, efficient option pricing is necessary, particularly for the calibration where many options with different strike prices and different maturities need to be priced at the same time. Therefore, a fast yet accurate pricing method is a necessity for banks and trading companies. Nowadays three groups of pricing methods are being used in the financial industry and academia, that is, Monte Carlo methods, partial (integro-)differential equation (PIDE) methods, and numerical integration methods, where the option price is modeled as the discounted expected value of the payoff at maturity. The latter type of method is attractive from both a practical and a research point of view, as the fast computational speed, especially for plain vanilla options, makes it useful for calibration at financial institutions. Usually numerical integration techniques are combined with the Fast Fourier transform or Hilbert transform, and therefore, the numerical integration methods are often referred to as the `transform methods'. Representatives of transform methods are the Carr-Madan method (Carr, Madan, 1999), the CONV method (Lord et al., 2008) and the Hilbert transform method (Feng, Linetsky, 2008). A recent contribution to the transform method category is the COS method proposed in Fang, Oosterlee (2008, 2009), that is, an option pricing method based on the Fourier cosine expansions. It departs from a truncated risk-neutral formula, in which the conditional density function is recovered in terms of its characteristic function, by Fourier cosine expansions.
This method can be used for asset processes as long as the characteristic function of the conditional density function is known, or can be approximated. For processes where the density function and its derivatives are continuous functions with respect to the underlying asset, the COS method exhibits an exponential convergence rate. Our research work is based on the COS method, which has been used for vanilla European option pricing (Fang, Oosterlee, 2008), vanilla early-exercise option pricing and barrier option pricing (Fang, Oosterlee, 2009). The motivation of this thesis is to further improve the robustness of the COS method, make it efficient for non-Lévy models, and extend it to different types of exotic options. The point of departure of this thesis is to improve the robustness of the COS method for call option pricing with early-exercise features, as presented in Chapter 1, where the call option prices are obtained from put option prices, in combination with the put-call parity and put-call duality relations, which are incorporated into our pricing algorithm at each early-exercise date to recover the Fourier coefficients and to compute the continuation value. The robustness of the pricing methods is demonstrated by error analysis, as well as by a series of numerical examples. In Chapter 2, the acceleration of option pricing by the COS method on the Graphics Processing Unit (GPU) is presented. After a brief discussion of the GPU and its potential for option pricing, we will study different ways of GPU implementation, followed by three examples of GPU acceleration, the so-called multiple strike option pricing, option pricing under hybrid models where the characteristic function is derived from a Riccati ODE system, and the example of Bermudan option pricing. The influence of data transfer between host and device is also discussed in this chapter.
Extension of the COS method to early-exercise option pricing with an Ornstein-Uhlenbeck (OU) process is explained in Chapter 3. OU processes for commodity derivatives, either with or without seasonality functions, are non-Lévy processes and more computationally expensive within the COS framework, as compared to Lévy processes. First of all, an accurate pricing algorithm is given, which can be used for all OU processes with different types of seasonality functions. Then, based on a detailed error analysis, a more efficient pricing method is proposed, which reduces the computing time from seconds to milliseconds. However, this new method is not advocated for all parameter settings. The conditions under which basis-point accuracy can be ensured are derived by error analysis. In the numerical part, the accuracy and efficiency of these two pricing methods are compared, and the conditions we derived from error analysis are further verified by several numerical experiments. In Chapter 4, we present an efficient pricing method for American-style swing options, based on Fourier cosine expansions. Here we assume that the holder of the swing option has the right, but not the obligation, to buy or sell a certain amount of commodity, such as gas and electricity, at any time before the expiry of the option, and more than once. Moreover, a recovery time is added between two consecutive exercises in which exercise is not allowed. Our pricing method is based on the Bellman principle, leading to a backward recursion procedure in which the optimal exercise regions are determined at each time step, after which the Fourier coefficients can be recovered recursively. Our method performs well for different underlying processes, different swing contracts and different types of recovery time. The pricing methods for European and early-exercise Asian options (ASCOS) are shown respectively in Chapters 5 and 6.
In Chapter 5, we present an efficient option pricing method for Asian options written on different types of averaged asset prices, but without early-exercise features. In our method, the characteristic function of the average asset is recursively recovered, with the help of Fourier expansions and Clenshaw-Curtis quadrature. Then it is used in the risk-neutral formula to get the Asian option price. An exponential convergence rate is observed for most Lévy processes, which is also supported by a detailed error analysis. Advantages of our pricing algorithm are that, as the number of monitoring dates increases, the method stays robust and the computing time does not increase significantly, as shown in the numerical results. Our pricing method for early-exercise Asian options is presented in Chapter 6. In this case, the Fourier cosine coefficients of the option price are recursively recovered by Fourier transform and Clenshaw-Curtis quadrature. Then these coefficients are inserted into the risk-neutral formula, which, in the early-exercise Asian case, is a two-dimensional integration, to get the option value. The chain rule from probability theory is also needed in our algorithm to factorize the joint conditional density functions. An exponential convergence rate in the option price, as derived in a detailed error analysis, is observed from various numerical experiments. Speedup factors of approximately one hundred are achieved on the GPU. Conclusions and insight into future research are to be found in Chapter 7. In this thesis, efficient pricing methods for different early-exercise and exotic options, based on the Fourier cosine expansions, are presented, followed by an error analysis and numerical results, from which we see that the COS method is an efficient, robust and flexible method for pricing different types of option products, for different asset models, and is suitable for GPU acceleration.
It is a promising tool for financial calibration and dynamic hedging in practice.","option pricing; Fourier cosine expansions; swing options; Asian options; put-call parity and duality; Ornstein-Uhlenbeck processes; Lévy processes; graphics processing unit","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied mathematics","","","",""
"uuid:210d4da5-c224-4217-8545-7f3f0d7418bb","http://resolver.tudelft.nl/uuid:210d4da5-c224-4217-8545-7f3f0d7418bb","Process-based modeling of coastal dune development","Muller, M.C.; Roelvink, D.; Luijendijk, A.P.; De Vries, S.; Van Thiel de Vries, J.S.M.","","2012","In this paper, the aeolian transport model DUNE (Sauermann et al., 2001, Kroy et al., 2002) that describes important features and dynamics of typical desert dunes, is extended such that it can be applied in sandy coastal areas. Initial tests explore the limitations of the model in coastal areas after which adaptations are proposed and implemented. The final model version is applied to a coastal profile near Vlugtenburg (Dutch Holland coast).","aeolian transport; dunes; nourishments; coastal morphology; process-based modeling","en","conference paper","Coastal Engineering Research Council","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","http://resolver.tudelft.nl/uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","Sequential robust optimization of a V-bending process using numerical simulations","Wiebenga, J.H.; Van den Boorgaard, A.H.; Klaseboer, G.","","2012","The coupling of finite element simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a generally applicable strategy for modeling and efficiently solving robust optimization problems based on time consuming simulations. Noise variables and their effect on the responses are taken into account explicitly. The robust optimization strategy consists of four main stages: modeling, sensitivity analysis, robust optimization and sequential robust optimization. Use is made of a metamodel-based optimization approach to couple the computationally expensive finite element simulations with the robust optimization procedure. The initial metamodel approximation will only serve to find a first estimate of the robust optimum. Sequential optimization steps are subsequently applied to efficiently increase the accuracy of the response prediction at regions of interest containing the optimal robust design. The applicability of the proposed robust optimization strategy is demonstrated by the sequential robust optimization of an analytical test function and an industrial V-bending process. For the industrial application, several production trial runs have been performed to investigate and validate the robustness of the production process. For both applications, it is shown that the robust optimization strategy accounts for the effect of different sources of uncertainty on the process responses in a very efficient manner.
Moreover, application of the methodology to the industrial V-bending process results in valuable process insights and an improved robust process design.","metal forming processes; finite element method; optimization; uncertainty; robustness; sequential optimization","en","journal article","Springer-Verlag","","","","","","","","Mechanical, Maritime and Materials Engineering","Materials Innovation Institute","","","",""
"uuid:4b10a419-4f8c-43b8-a422-c3156db5c400","http://resolver.tudelft.nl/uuid:4b10a419-4f8c-43b8-a422-c3156db5c400","Bankruptcy by catastrophes for major multi-nationals: Stock exchange sensitivity for three catastrophes","Van Gulijk, C.; Ale, B.J.M.","","2012","This paper investigates the effect that major catastrophes have on stock exchange values for major multi-nationals. The paper demonstrates that the Sharpe analysis is more sensitive in identifying effects than simply following the daily stock values for assessing market response. It was found that major multi-nationals are capable of absorbing incredible amounts of financial damage following from catastrophes before stock markets react. This is partly due to the complexity of modern financial markets, where risks can be sold or transferred easily from the operative entity to another entity. The findings suggest that Hudson’s (2007) HSE culture ladder requires a step below the pathological to reflect the reaction of the stock exchange market to major catastrophes: the indifferent level. If the financial risks of catastrophes are covered, market traders rarely assign further consequences for the loss of life to the company through lowered stock prices. Despite that, there may be a threshold value for financial loss that could bring major multi-nationals to bankruptcy due to market capital loss.","chemical and processing industry; enterprise risk management; catastrophes","en","conference paper","Curran Associates","","","","","","","","Technology, Policy and Management","Values and Technology","","","",""
"uuid:827ee52f-b0cc-40b2-90c0-00e3c57078d1","http://resolver.tudelft.nl/uuid:827ee52f-b0cc-40b2-90c0-00e3c57078d1","Designing socio-technical systems: Structures and processes","Bots, P.W.G.; Van Daalen, C.","","2012","The Systems Engineering, Policy Analysis and Management (SEPAM) MSc curriculum taught at Delft University of Technology focuses on the design of socio-technical systems (STS). We teach our students to structure design activities by considering what we call the TIP aspects: Technical systems, Institutions, and decision-making Processes. Students find TIP design difficult, not only because STS design is complex (this difficulty can be overcome with time and practice), but also because the I and P concepts seem difficult to grasp. Our students struggle with the notion of institution, the lack of a general framework for institutional design, and the fluidity and ambiguity of the concept of a decision-making process in the context of STS design. The objective of this paper is to clarify the TIP elements and to refine the TIP way of thinking. In clarifying the elements, we make a distinction between structure and process. Our premise is that an engineered artifact is a structure that, together with the context in which it is implemented, produces a process that performs the intended function. Our distinction between structure and process shows why the acronym TIP design is somewhat misleading. The T and I refer to structures, while the P refers to processes. This paper adds to the TIP design way of thinking by showing the analogies between technical and institutional structures. We argue that systems thinking/systems design applies to any artifact, be it technical or institutional. The structure-process distinction also allows us to better understand the system life cycle and clarify the concept of a decision-making process. 
Decision-making processes are important processes in all phases of a system life cycle, and they are themselves shaped by institutional structures which are placed in a context.","systems design; systems engineering; structures in systems engineering; processes in systems engineering; decision making process; institutional design","en","conference paper","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:82b10606-2b67-4094-8143-42adca18ab53","http://resolver.tudelft.nl/uuid:82b10606-2b67-4094-8143-42adca18ab53","Adaptive Collaboration Support Systems: Designing Collaboration Support for Dynamic Environments","Janeiro, J.; Knoll, S.W.; Lukosch, S.G.; Kolfschoten, G.L.","","2012","Today, engineering systems offer a variety of local and web-based applications to support collaboration by assisting groups in structuring activities, generating and sharing data, and improving group communication. To ensure the quality of collaboration, engineering system design needs to analyze and define possible collaboration processes. Currently, engineering system design focuses on collaboration processes in a static environment. However, today’s world is characterized by dynamic environments that can influence the requirements of a collaboration process and require the process to be adapted at runtime. This paper introduces a new approach for engineering systems design that provides adaptive collaboration support for changing environments. This approach is based upon a conceptual architecture for engineering systems that uses data streams to analyze the dynamic environment and adapts a collaboration process on demand according to varying goals, time and data.","Product Lifecycle Management; Collaboration; Collaboration Support Tool; Collaboration Process Design","en","conference paper","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:179bd945-6740-46c1-bc84-7aaef2beb17e","http://resolver.tudelft.nl/uuid:179bd945-6740-46c1-bc84-7aaef2beb17e","The Power of Sponges: High-tech versus Low-tech Gaming Simulation for the Dutch railways","Meijer, S.A.","","2012","To facilitate innovation in transport systems there is a need for simulated environments to experiment with new configurations, ideas and solutions. Gaming simulation is such an environment, and this paper presents the approach applied in capacity allocation and traffic control innovation for the Dutch railways. Both high-tech and low-tech games have been built and applied. This paper discusses the differences in use and value of the two types. Real-world application in seven cases has shown that the actual purpose of low-tech and high-tech games does not differ much, but that the level of detail in the quantitative data yielded is much better in the high-tech games, as can be expected. Low-tech games, however, have proven useful in settings where multiple types of roles had to be present. Connecting them to high-tech games requires other components that have long development times. Even though these modules are currently under construction, the low-tech games continue to fulfill their purpose due to their capability of stylized representation of systems and data.","gaming simulation; traffic control; process innovation; fidelity","en","conference paper","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:f74accdc-d369-4360-8918-f47b9a96e54f","http://resolver.tudelft.nl/uuid:f74accdc-d369-4360-8918-f47b9a96e54f","Deciding on Innovation at a Railway Network Operator: A Grounded Theory Approach","Van den Hoogen, J.; Meijer, S.A.","","2012","Innovation at a railway network operator depends on the decision-making processes in the daily work of operational professionals and staff. This paper is about innovative measures at a railway network operator, required to increase capacity on the railway network without investing in expensive infrastructural extensions. Using field observations and open interviews, the authors found that project managers limit their design space early on in the decision-making process. The range of alternatives under study is limited to decrease the technical and social complexity. By doing so, project managers are able to realize a phased and sequential decision-making process that leads to a working proof-of-concept. The solutions are only valid under highly restrictive assumptions. The uncertainty about the value of a solution once implemented in the total railway system therefore remains high, and many innovation processes end with the proof-of-concept. This study's contribution to existing theory is the provision of an alternative explanation for the rigidity of railway systems and network-based infrastructures in general. Rather than being a result of political decision making in a network of interdependent actors with conflicting incentives, incremental innovations can also be a result of a more sequential and phased decision-making process, as project managers purposefully decrease the technical and social complexity beforehand.","decision making; process innovation; railway systems","en","conference paper","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:7bb6b44b-4382-4018-9cd2-da6e4a3582cb","http://resolver.tudelft.nl/uuid:7bb6b44b-4382-4018-9cd2-da6e4a3582cb","Monitoring and Characterization of Crystal Nucleation and Growth during Batch Crystallization","Kadam, S.S.","Stankiewicz, A.I. (promotor)","2012","Batch crystallization is commonly used in the pharmaceutical, agrochemical, specialty and fine chemicals industries. The advantages of batch crystallization lie in its ease of operation and the relatively simple equipment that can be used. On the other hand, a major disadvantage associated with it is the inconsistent and usually poor product quality. Quality of the crystalline product, which is defined in terms of the Crystal Size Distribution (CSD), purity, solid-state form, etc., is related to its performance when used as an ingredient during subsequent processes. Also the quality of the product from a batch crystallization process has a strong influence on the efficiency of downstream operations like filtration and drying. Hence it is essential to reduce the batch-to-batch variations in the product quality. In this thesis three basic requirements for achieving consistent product quality have been identified. These requirements are a.) strong domain knowledge, b.) proper means of characterizing crystallization phenomena, c.) adequate process monitoring capabilities. The results presented in this thesis help in meeting the above requirements and are summarized below. a. Crystallization domain knowledge: Three important results related to the Metastable Zone Width (MSZW) have been obtained in this thesis which cannot be explained by the conventional understanding. It has been shown in this thesis that i. the MSZW is not a deterministic property, ii. the MSZW is volume dependent, and iii. there exists a relationship between MSZWs measured at different volumes under similar conditions. The MSZW measurements at small volumes of 1 mL show large variations, while the variations in the measurements decrease as the volume is increased.
The extent of variation in the MSZW measurements at a particular volume changes from one model system to the other. The smallest measured MSZW at all volumes between 1 millilitre and 1 litre is the same. The dramatic deviation from the conventional understanding of the measured MSZWs is a result of an inadequate understanding of the nucleation process. Conventionally, a multiple-nuclei mechanism is assumed, in which a large number of nuclei are born together in a very short time interval. However, in this thesis evidence is presented for a mechanism in which only a single nucleus is formed initially in a supersaturated solution, which grows into a single crystal. After growth to a certain size, this single crystal undergoes extensive secondary nucleation, which results in multiple crystalline fragments. The newly postulated mechanism is called the Single Nucleus Mechanism. All the crystals produced in an unseeded batch crystallization therefore originate from a single primary nucleus by secondary nucleation. This indicates that during an unseeded industrial batch crystallization process, there will be different generations of crystals present. Hence, in order to achieve crystals with desirable quality, control strategies must focus on controlling both primary and secondary nucleation. b. Crystallization characterization: In this thesis novel methods to characterize crystal nucleation, growth and MSZW have been developed. The characterization of crystal nucleation and MSZW is done with the help of a stochastic model developed based on the Single Nucleus Mechanism. The stochastic model indicates that the nucleation rate is several orders of magnitude smaller than that postulated by the Classical Nucleation Theory. The low nucleation rate leads to the stochastic MSZWs. Unlike the conventional population balance model, which shows that the MSZW is independent of volume, the stochastic model indicates that the MSZW is a function of volume.
The stochastic model also enables scale-dependent study of the MSZW. The characterization of crystal growth is performed by the combination of information from both the concentration measurement sensor and the crystal size distribution (CSD) measurement sensor. It is shown that by combining the concentration and CSD measurements a better parameter estimation and better process description could be achieved. c. Crystallization process monitoring: In this thesis in situ measurement of several process variables has been successfully demonstrated not only at lab scale but also at industrial scale. A comparison has been performed between two spectroscopy-based techniques, viz. attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) and Fourier transform near infrared spectroscopy (FT-NIR), for in situ concentration monitoring during crystallization at lab scale. Based on the comparison, ATR-FTIR is found to be more accurate than FT-NIR for different model systems. In spite of accurate concentration monitoring at lab scale, the concentration monitoring with ATR-FTIR leads to biased measurements at industrial scale due to the differences in the curvature of fiber optics. To facilitate the in situ concentration measurements in industrial environment, two calibration procedures have been investigated which circumvent problems associated with calibration transfer from lab to industrial scale. In the first procedure data from a cheap ultrasound-based concentration probe is combined with the spectra from the ATR-FTIR spectroscope. It is shown that this combination of data enables a rapid calibration of ATR-FTIR at industrial scale. In the second procedure, multiple Process Analytical Technology (PAT) tools that were arranged in a measurement skid were calibrated simultaneously at industrial scale. The skid configuration of the PAT tools allows for the combination of the calibration procedure with process characterization. 
The monitoring of the process at industrial scale with multiple sensors brings new process insights which can lead to better process control and optimization strategies. The results presented in this thesis will enable achievement of consistent product quality by facilitating efficient process and equipment design, process development, and process control.","Batch cooling crystallization; Process Analytical Technology; Crystal Nucleation; Single Nucleus Mechanism","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:7c385877-2287-4790-b78a-b94fd39f1434","http://resolver.tudelft.nl/uuid:7c385877-2287-4790-b78a-b94fd39f1434","An approach towards generic coastal geomorphological modelling with applications","Ye, Q.","Roelvink, J.A. (promotor)","2012","This thesis presents the development a generic morphological model for both structured and unstructured grid and the extension to a biogeomorphological model. For the morphological model, numerical algorithms are adjusted to adapt unstructured grid and are validated against analytical solutions and flume experiments. For the bio-geomorphological extension, relevant ecological processes are coupled with morphodynamic processes at various scales and are validated against the field data in Lake Veluwe. Capability of the model has been explored for applications of two salt marsh restoration cases in United States and the large scale morphodynamics of shoreface connected radial sand ridges located in South-east China Sea. Validations and applications show that this modelling platform is capable to be a multidiscipline research tool for morphologists and ecologists / biologists.","process-based morphology modelling; biogeomorphology; unstructured grid","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:f71fabea-424b-42e5-93aa-701715eed17d","http://resolver.tudelft.nl/uuid:f71fabea-424b-42e5-93aa-701715eed17d","Ground movements generated by sequential twin-tunnelling in over-consolidated clay","Divall, S.; Goodey, R.J.; Taylor, R.N.","","2012","The expansion of urban populations comes with an associated demand for increased public transport. An often utilised solution is to construct a rapid transit system within tunnels. Generally, a pair of tunnels are constructed within relative close proximity. The construction of these tunnels will generate ground movements which have the potential to cause damage to existing surface and subsurface structures. Modern tunnelling practice aims to reduce these movements to a minimum; however there is still a requirement for accurate assessment of settlements. For tunnels driven in clay, superposition of settlement predictions made by considering a single tunnel is an accepted method used to estimate movements around pairs of tunnels. This presumes that the movements generated from the construction of the second tunnel are not influenced in any way by the presence of the first tunnel. A series of plane strain centrifuge model tests have been conducted to explore the validity of superposition as a prediction method. The tests consisted of a sequential twin-tunnel construction with varied centre-to-centre spacing in over-consolidated clay. Relatively complex apparatus facilitated a predefined volume loss whilst monitoring surface settlement, tunnel support pressures and pore-water pressures. The measured data were assessed against superposition for surface vertical settlements in the plane perpendicular to an advancing tunnel face. The results highlight some inconsistencies with the superposition method.","centrifuge; tunnels and tunnelling; construction process; settlements","en","conference paper","","","","","","","","","","","","","",""
"uuid:650e7e5d-5e8f-46a9-862b-095d501f9037","http://resolver.tudelft.nl/uuid:650e7e5d-5e8f-46a9-862b-095d501f9037","Improving riverbed sediment classification using backscatter and depth residual features of multi-beam echo-sounder systems","Eleftherakis, D.; Amiri-Simkooei, A.; Snellen, M.; Simons, D.G.","","2012","Riverbed and seafloor sediment classification using acoustic remote sensing techniques is of high interest due to their high coverage capabilities at limited cost. This contribution presents the results of riverbed sediment classification using multi-beam echo-sounder data based on an empirical method. Two data sets are considered, both taken at the Waal River, namely Sint Andries and Nijmegen. This work is a follow-up to the work carried out by Amiri-Simkooei et al. [J. Acoust. Soc. Am. 126(4), 1724–1738 (2009)]. The empirical method bases the classification on features of the backscatter strength and depth residuals. A principal component analysis is used to identify the most appropriate and informative features. Clustering is then applied to the principal components resulting from this set of features to assign a sediment class to each measurement. The results show that the backscatter strength features discriminate between different classes based on the sediment properties, whereas the depth residual features discriminate classes based on riverbed forms such as the “fixed layer” (stone having riprap structure) and riverbed ripples. Combination of these two sets of features is highly recommended because they provide complementary information on both the composition and the structure of the riverbed.","acoustic wave scattering; backscatter; echo; feature extraction; principal component analysis; sediments; sonar signal processing; underwater sound","en","journal article","Acoustical Society of America","","","","","","","2012-11-30","Aerospace Engineering","Remote Sensing","","","",""
"uuid:a420a326-a751-4815-8144-136911eb5fda","http://resolver.tudelft.nl/uuid:a420a326-a751-4815-8144-136911eb5fda","Solid-phase crystallization of ultra high growth rate amorphous silicon films","Sharma, K.; Ponomarev, M.V.; Verheijen, M.A.; Kunz, O.; Tichelaar, F.D.; Van de Sanden, M.C.M.; Creatore, M.","","2012","In this paper, we report on the deposition of amorphous silicon (a-Si:H) films at ultra-high growth rate (11–60?nm/s) by means of the expanding thermal plasma technique, followed by solid-phase crystallization (SPC). Large-grain (?1.5??m) polycrystalline silicon was obtained after SPC of high growth rate (?25?nm/s) deposited a-Si:H films. The obtained results are discussed by taking into account the impact of the a-Si:H microstructure parameter R* as well as of its morphology, on the final grain size development.","crystallisation; elemental semiconductors; grain size; plasma materials processing; silicon","en","journal article","American Institute of Physics","","","","","","","","Applied Sciences","QN/Quantum Nanoscience","","","",""
"uuid:102870f0-d0d6-47d4-942a-6dc353a03633","http://resolver.tudelft.nl/uuid:102870f0-d0d6-47d4-942a-6dc353a03633","Thin film surface processing by ultrashort laser pulses (USLP)","Scorticati, D.; Skolski, J.Z.P.; Romer, G.R.B.E.; Huis in 't Veld, A.J.; Workum, M.J.; Theelen, M.J.; Zeman, M.","","2012","In this work, we studied the feasibility of surface texturing of thin molybdenum layers on a borosilicate glass substrate with Ultra-Short Laser Pulses (USLP). Large areas of regular diffraction gratings were produced consisting of Laserinduced periodic surface structures (LIPSS). A short pulsed laser source (230 fs-10 ps) was applied using a focused Gaussian beam profile (15-30 ?m). Laser parameters such as fluence, overlap (OL) and Overscans (OS), repetition frequency (100 200 kHz), wavelength (1030 nm, 515 nm and 343 nm) and polarization were varied to study the effect on periodicity, height and especially regularity of LIPSS obtained in layers of different thicknesses (150-400 nm). The aim was to produce these structures without cracking the metal layer and with as little ablation as possible. It was found that USLP are suitable to reach high power densities at the surface of the thin layers, avoiding mechanical stresses, cracking and delamination. A possible photovoltaic (PV) application could be found in texturing of thin film cells to enhance light trapping mechanisms.","Ultra Short Laser Pulses; surface processing; molybdenum; thin film; ripples; LIPSS; ps laser","en","conference paper","International Society for Optical Engineering (SPIE)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Electrical Sustainable Energy","","","",""
"uuid:3aa47d1d-11bf-4bd8-85ae-c05bacfe7c24","http://resolver.tudelft.nl/uuid:3aa47d1d-11bf-4bd8-85ae-c05bacfe7c24","Efficient pricing of Asian options under Lévy processes based on Fourier cosine expansions. Part II. Early-exercise features and GPU implementation","Zhang, B.; Van der Weide, J.A.M.; Oosterlee, C.W.","","2012","In this article, we propose an efficient pricing method for Asian options with early–exercise features. It is based on a two–dimensional integration and a backward recursion of the Fourier coefficients, in which several numerical techniques, like Fourier cosine expansions, Clenshaw–Curtis quadrature and the Fast Fourier transform (FFT) are employed. Rapid convergence of the pricing method is illustrated by an error analysis. Its performance is further demonstrated by various numerical examples, where we also show the power of an implementation on the Graphics Processing Unit (GPU).","earlyexercise Asian option; arithmetic average; Fourier cosine expansion; chain rule; ClenshawCurtis quadrature; exponential convergence; graphics processing unit (GPU) computation","en","report","Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:44bd6b59-c42b-44e4-a509-ba0538b5d596","http://resolver.tudelft.nl/uuid:44bd6b59-c42b-44e4-a509-ba0538b5d596","Automatic Generation of Assembly Sequence for the Planning of Outfitting Processes in Shipbuilding","Wei, Y.","Nienhuis, U. (promotor)","2012","The most important characteristics of the outfitting processes in shipbuilding are: 1. The processes involve many interferences between yard and different subcontractors. In recent years, the use of outsourcing and subcontracting has become a widespread strategy of western shipyards. There exists not only the vertical relationship between a yard and a subcontractor but also the horizontal relationships between the subcontractors themselves. 2. They require timely and detailed engineering information. Outfitting performance depends largely on the quality, quantity, and timeliness of technical information supplied by engineering. 3. Much ambiguity and tacit knowledge exists especially in the outfitting processes. Planners quite often use rules of thumb abundantly to make important decisions and workers, who carry out daily outfitting assembly work, very largely rely on many years of experience. All these make the planning of outfitting processes, which have not been sufficiently considered in practice, a great challenge to both shipyards and subcontractors. The study presented in this thesis aims to develop an automatic sequence generation method which is able to give the yard and subcontractors a realistic and reliable outfitting plan that identifies the relationships among outfitting activities, estimates a realistic mounting time, and displays it by means of animation. First an integral planning system for the outfitting processes is proposed. 
It has been divided into three interrelated steps: activity generation, sequence determination and schedule determination, trying to answer the questions ""What has to be accomplished?"", ""In what order will it be accomplished?"", and ""When will it be accomplished?"" respectively. The first two steps comprise process planning and the last is scheduling. The focus of the research is to determine the assembly sequence of the outfitting components, answering the first two questions. Different assembly sequence determination methods, attempted in the mechanical industry, are described and the possibility of their application in shipbuilding outfitting processes is discussed. These methods include liaison diagrams, assembly sequence diagrams, AND/OR graphs and the binary matrix method. The interference matrix, part of the binary matrix method, is provisionally selected to detect the relationships between outfitting components because of its practical applicability to the shipbuilding situation. Next, the kind of geometry attributes of outfitting components that are important in the determination of their mounting orders is analyzed. After field observations and thorough consultation with workers, the decisive attributes of components--position, material, weight, size, penetration and minimum work distance--have been chosen. The way to describe these attributes mathematically with the purpose of making them programmable is introduced. Weighting coefficients are used to quantify the relative impact of the components' geometry attributes on the mounting orders. They are calculated using the Analytical Hierarchy Process (AHP) method, applied to questionnaire results. Subsequently, data collection and preparation are reported. For our own laboratory purpose, the detailed engineering CAD model, generated by TRIBON, was chosen to be the original data source. A data preparation model is presented. 
It is necessary to reorganize the data in a proper format so that it can be read by the sequence generation model. Apart from this, it is also able to extract the necessary information. The assembly sequence generation model is developed starting from the requirement that the sequence should follow automatically from the CAD model of a particular section or compartment in the ship. All physical attributes of a component and their relative importance in deciding its mounting order were modeled, which resulted in the derivation of finish-start relationships between components. Two types of outputs of the model lend themselves to validation: 1. The numerical results for throughput time and individual mounting times; 2. The detailed assembly sequence visualized by Gantt charts or by 3D animation. The validation study is described to investigate the correctness of the estimation of the mounting time of assembling each type of component, followed by application of the method for four test cases. The validation of the total mounting time for these four representative sections/compartments indicates, but does not prove, that the estimates for the mounting time are realistic. Also the validation work supports the conclusion that the generated sequence is realistic but not flawless. Discussion of the model's result shows that the methodology does not yet consider all technical and organizational aspects of outfitting processes at the same time. In fact, given all the complexities, it is gratifying that the method already yields useful assembly sequences that provide a basis for a better planning method. Nevertheless, before the developed method can be implemented on the worksite, additional effort will be necessary both in gathering validation data and improving the model (including the integration of steel structure). In conclusion, an automatic assembly sequence generation model for the planning of outfitting processes is developed in the thesis. 
It already allows generation of interference-free and integral assembly sequences including their throughput time. The behaviour of the model will further improve and become even more realistic by implementation of all three dimensions of interference-detection, an improved equipment-mounting algorithm and the integration of steel structure information.","assembly sequence; shipbuilding; outfitting processes; sequence generation; interference detection; physical attributes; mounting time","en","doctoral thesis","VSSD","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine & Transport Technology","","","",""
"uuid:2cb801a8-4a7b-4bed-bec7-185314dddff1","http://resolver.tudelft.nl/uuid:2cb801a8-4a7b-4bed-bec7-185314dddff1","Theoretical underpinning and prototype implementation of scenario bundle-based logical control for simulation of human–artifact interaction","Van der Vegte, W.F.; Horváth, I.","","2012","This article presents a new methodology that enables designers to include in simulations not only the physics aspects of artifact behavior, but also human actions. The motivation for this research came from the fact that none of the conventional approaches to engineering simulations includes manipulative control of products by users as foreseen by designers. By implementing control over physics simulations, changes in parameters can be introduced that alter the course of the simulated process. As a means to do this, we propose to use scenario bundles, with which designers can operationalize their conjectures of how human users interact with products as a series of interconnected simulations. For the imaginary use process described in a scenario bundle, the designer can specify various product designs, user characteristics, and environments, which may in each case lead to different concatenations of simulation actions. The proposal facilitates the exploration of possible mismatches and anomalies in use processes. In this article, we describe the theoretical fundamentals and the overall concept of the proposed methodology, as well as its realization as a proof-of-concept implementation. This implementation can be used as a tool to specify scenario bundles and to perform controlled simulations of human–product interaction. The use of the tool is demonstrated through a practical example. 
Although the implementation has proven to be successful in terms of executing scenario bundles, two bottlenecks need further attention: (i) devising stable algorithms for large deformations in physical interaction simulation and (ii) incorporation of already existing algorithms for simulation of low-level human motion control.","scenario bundles; product design; virtual prototyping; simulation control; use process; human-product interaction","en","journal article","Elsevier","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:1d7076f4-74f8-497b-ab47-5e5d465f2cc8","http://resolver.tudelft.nl/uuid:1d7076f4-74f8-497b-ab47-5e5d465f2cc8","In-situ product removal by membrane extraction","Heerema, L.D.","Van der Wielen, L.A.M. (promotor); Keurentjes, J.T.F. (promotor)","2012","In bioproduction processes of chemicals and pharmaceuticals, downstream processing usually is a significant cost factor. The products require a high purity (especially biopharmaceutical products), therefore, the process usually contains a large number of separation steps. Moreover, the high costs in downstream processing are caused by the fact that the products are often produced in a dilute environment. Since high product concentrations can cause inhibition of biological growth and production, the product should be removed from the production medium at relatively low concentrations. The use of in-situ product removal (ISPR) is a useful strategy to overcome this problem. Integration of the first downstream process step with the bioreactor leads to direct removal of product during growth and production reactions, potentially increasing the productivity of the biocatalyst and thus the total yield of product. ISPR potentially decreases waste streams, fermentor volume and the stress on micro-organisms resulting from oxygen limitation and shear stress caused by the cycling of the fermentation broth. In addition, decreasing the number of steps in the downstream processing of the product potentially leads to a decrease in the total process costs and processing time. The aim of this thesis is to study the potential of integrated membrane extraction as a tool for ISPR for the removal of products from a fermentation broth. Membrane extraction (pertraction) enables a large contacting surface area between fermentation broth (aqueous phase) and solvent without the formation of an emulsion and is therefore a useful technique for ISPR. 
The production of phenol by Pseudomonas putida S12 was chosen as a model process to illustrate product inhibition and to demonstrate the effects of ISPR by extraction with 1-octanol. Phenol was chosen as a model component and is a typical example of a fine chemical. It serves as a good model for aromatics containing a hydroxyl group. Additionally, due to its toxicity, phenol can well illustrate the effects of product inhibition. An experimental study to illustrate product inhibition of phenol on the recombinant organism Pseudomonas putida S12 is described in chapter 2. It was demonstrated that the implementation of membrane extraction does not influence growth and phenol production. When phenol is removed from the fermentation broth by pertraction, a lower maximum aqueous phenol concentration is achieved, while the total phenol production increases to 132% as compared to the fermentation without pertraction. There are indications that the volumetric productivity increases slightly in the fermentations with in-situ pertraction as compared to the reference experiments. In chapter 3, detailed calculations on the production of phenol in a conceptual process design illustrate the benefits and disadvantages of ISPR with an implemented membrane extraction unit in a bioreactor as compared to ISPR with a membrane extraction unit outside the reactor. Results show that running the fermentation process at a lower product concentration results in a more efficient substrate utilization into biomass and phenol. The disadvantage of the integrated process is the need for large distillation columns and a high energy input for solvent regeneration due to the low product concentration in the solvent and the high solvent fluxes. Economic evaluations of the two processes show that to obtain a return on investment of 15%, the product cost price of the integrated process is a factor of three lower as compared to the non-integrated process. 
In chapter 4, mass transfer is studied for phenol in fermentation systems and single fiber modules. Additionally, an approach is given for a novel membrane extraction module design for implementation in a large scale bioreactor by combining experimental and theoretical results. Factors that were found to influence the overall mass transfer coefficient are the membrane wall thickness, solvent (partition coefficient), sterilization and fouling (negative effect). Furthermore, bottlenecks and strategies for improvement are discussed. The integration of an extra obstacle into the reactor can give rise to several bottlenecks for both the separation process and the biological growth and production processes, mainly caused by the altered mixing pattern. In chapters 5 and 6, the use of alternative solvents consisting of polymeric micelles solubilized in water is discussed, and an alternative membrane extraction process evaluation is made. The micelles are formed of poly(ethylene oxide)–poly(propylene oxide) (PEO–PPO–PEO) block copolymers, commercially known as Pluronics. Pluronics are water-soluble, nonionic macromolecular surface active agents which are environmentally mild and hardly toxic to micro-organisms. The applicability of aqueous solutions of Pluronics for the removal of phenol in a separation and regeneration process is evaluated. Experimental results show that Pluronic micelles allow extraction of phenol from aqueous solutions at 30 °C (fermentation temperature). The phenol can be released due to the transition of the Pluronic micelles into unimers with a mild temperature switch from 30 to 8 °C. Ultrafiltration membranes provide a barrier between the aqueous Pluronic stripping solution and the aqueous solution in a (bio)reactor containing the desired product. Steady state model analysis and cost estimation show that the process costs are mainly determined by the required membrane area. 
In chapter 7, the potential of integrated membrane extraction as an in-situ product recovery tool for the removal of products from a fermentation broth is discussed. Furthermore, improvement of the mass transfer limitation at the reactor side by a discontinuous moving membrane module is discussed. Fouling of micro-organisms and medium components at the aqueous (shell) side of the membrane has a negative effect on the overall mass transfer coefficient by increasing the boundary layer thickness at the reactor side of the membrane surface. To improve the shell-side mass transfer, the turbulence at the membrane surface can be increased by the use of alternative membrane modules which cause high surface shear rates along the membrane. The novel membrane module described in this chapter shows interesting possibilities in microfiltration to improve the flux by reducing the fouling at the membrane surface. Finally, it can be concluded that integrated membrane extraction shows potential as a tool for the removal of products from a fermentation broth. The benefits of an integrated process will pay off even more for very toxic and inhibiting products that do not allow for high concentrations in the (bio)reactor. The alternative process based on Pluronic micelles can be suited for products that allow for a higher critical concentration in the (bio)reactor as compared to phenol. The resulting higher driving force for membrane extraction will result in a decrease of the overall process costs. For products with a lower solubility in water, recovery is easy after regeneration of the micellar solvent.","in-situ product removal; Membrane extraction; Fermentation; Process design; Economic evaluation; Mass transfer; Block copolymers; Product inhibition; Fouling","en","doctoral thesis","Ipskamp B.V.","","","","","","","","Applied Sciences","Biotechnology","","","",""
"uuid:e16da59f-6dd0-44c8-bcba-20328f67d4c2","http://resolver.tudelft.nl/uuid:e16da59f-6dd0-44c8-bcba-20328f67d4c2","Computational Biology in Clinical Proteomics and Chromatin Genomics","Meuleman, W.","Reinders, M.J.T. (promotor); Van Steensel, B. (promotor)","2012","The work in this thesis is concerned with two very distinct biological fields. The first part pertains to the development of techniques to aid in the search for clinical biomarkers for use in the early detection of cancer. The second part aims to elucidate in what way a genome is organised in a cell nucleus and the functional consequences of this organisation. Part I: Clinical Proteomics. Cancer is a leading cause of death world-wide. The success of treatment is directly correlated with the stage of tumour progression. Therefore, it is of great importance to detect the occurrence of cancer as early as possible. Already for a long time, it is believed that the presence of a tumour has consequences for the repertoire of proteins and fragments thereof, i.e. peptides, present in the blood circulation. It has been proposed to use mass spectrometry to analyse the proteomic content of blood samples. Ultimately, such an approach would be used in routine population screening efforts, with the great advantage that the technique is largely non-invasive, as opposed to taking biopsies. After analysing samples using mass spectrometers, computational methods can be used to identify which peptides are predictive for a certain disease status. Such peptides are referred to as biomarkers. In Part I of this thesis, we describe work concerning the development of computational methods for processing mass spectrometry data with the goal of identifying such biomarkers. The first step in a mass spectrometry data analysis project is commonly the normalisation of data. Typically, raw same-sample mass spectra are not very comparable, due to high levels of inter-spectra variation. 
For this reason, spectra are normalised in an attempt to reduce this variance. We have conducted a comprehensive comparison of various normalisation methods, which are described in this thesis. We demonstrate that the method used by the majority of users performs very poorly, and advise on several methods that improve the performance significantly. After normalisation, spectral peaks representing the presence of peptides can be identified. In this thesis, we propose a method for doing so using multiple intermediate measurements that are normally discarded. We show that this approach outperforms existing methods and allows one to attach significance levels to detected peaks. Part II: Chromatin Genomics. All organisms are made up of cells; each cell containing an exact copy of the genome (i.e. the full collection of DNA). A large subgroup of organisms has their DNA contained in a separate compartment within the cell, called the nucleus. This subgroup of organisms, including animals like ourselves, is collectively referred to as eukaryotes. The diameter of a single human cell nucleus is about 6 micrometres, while the total length of all DNA contained in it is approximately 2 metres. This poses two interesting main questions. The first one is concerned with how this large amount of DNA is stored in such a confined space. Indeed, the three-dimensional organisation of chromosomes within the nucleus is largely unknown. The nucleus has a membrane separating it from the rest of the cell. The inside of this membrane is lined with a network of proteins collectively referred to as the nuclear lamina. In this thesis we present high-resolution maps of the interaction of human and mouse genomes with this nuclear lamina. We find that mammalian chromosomes are organised by way of large Lamina Associated Domains (LADs). In this way, we provide a detailed view of the spatial organisation of interphase chromosomes. 
The second main question concerns the consequences of this organisation for the function of a cell. We find that during cell differentiation chromosomes are substantially refolded. In fact, hundreds of genes either migrate away from or towards the nuclear lamina during this process. We show that these genes change their activity upon relocalisation; genes that move towards the nuclear lamina are turned off, while genes that are removed from the lamina become more active. Despite this, we find that most of the spatial chromosome organisation is identical across all cell types we studied. We propose that these static regions collectively form a basal chromosome architecture and find that it is extremely well conserved between mouse and human, even though these species are separated in evolution by more than 75 million years. Using sequence analyses, we demonstrate that the basal chromosome architecture is largely encoded in the underlying genomic sequence. We further provide evidence that this genomic sequence alone is enough to tether specific regions to the nuclear lamina. Taken together, we show that mammalian genomes are organised in the cell nucleus by large regions contacting the nuclear lamina, which are largely static across cell types as well as between species. We further provide a potential mechanistic explanation in which the association of loci with the nuclear lamina is directly encoded in the genomic sequence.","Computational Biology; Clinical Proteomics; Chromatin Genomics; nuclear lamina; DamID; microarray; sequence analysis; mass spectrometry; mass spectra; gene regulation; normalisation; normalization; pre-processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:ccd4f3c8-2c1a-4a15-9a43-4464985f15fa","http://resolver.tudelft.nl/uuid:ccd4f3c8-2c1a-4a15-9a43-4464985f15fa","Development of NbTiN-Al direct antenna coupled kinetic inductance detectors","Lankwarden, Y.J.Y.; Endo, A.; Baselmans, J.J.A.; Bruijn, M.P.","","2012","We have developed a coplanar waveguide (CPW) Kinetic Inductance Detector consisting of Al and NbTiN, coupled at its shorted end to a planar antenna. To suppress the odd mode due to direct coupling to sky radiation by the KID, we have also developed freestanding metal air bridges.","low temperature detectors; kinetic inductance detectors; process development; air bridges","en","journal article","Springer","","","","","","","","Applied Sciences","Kavli Institute of NanoScience","","","",""
"uuid:9732f32e-7376-4f52-abe4-b191da116125","http://resolver.tudelft.nl/uuid:9732f32e-7376-4f52-abe4-b191da116125","Can enterprise architectures reduce failure in development projects?","Janssen, M.F.W.H.A.; Klievink, B.","","2012","Purpose: Scant attention has been given to the role of enterprise architecture (EA) in relationship to risk management in information system development projects. Even less attention has been given to the inter-organizational setting. The aim of this paper is to better understand this relationship. Design/methodology/approach: The relationship between EA and project failure/success is investigated by – through a workshop – creating a retrospective view on the use of architectures in large and complex ICT-projects. Findings: Failure factors can be grouped into organization network, people, process, product, and technology categories. The findings show that a disappointingly limited number of public sector development projects make sufficient use of architecture as a risk management instrument. Architectures should be considered both as a risk-mitigating instrument and as an organizational shaping routine to reduce project failure and manage risk in organization networks. Research limitations/implications: A single workshop with a limited number of participants was conducted. The findings need further refinement and generalization based on more empirical research investigating the relationship between architecture and project failure. Practical implications: Architecture should give explicit consideration to risk management and help to draw attention to this. Governance mechanisms need to be defined to ensure that the organizations’ members become aware of both architecture and risk management. Risk management and EA have similarities, as they are both an instrument and an organizational shaping routine.
Originality/value: Governments collaborate more and more in organizational networks and for that reason often multiple organizations are involved in information system project developments. Enterprise architecture as a risk mitigation instrument has not, to date, been given attention. Paper type: Research paper","information systems; risk management; organizational processes; project failure; critical failure factors; enterprise architecture; information architecture","en","journal article","Emerald Group Publishing Limited","","","","","","","","Technology, Policy and Management","Infrastructure Systems & Services","","","",""
"uuid:45794518-e867-43a8-9780-477054774b09","http://resolver.tudelft.nl/uuid:45794518-e867-43a8-9780-477054774b09","THE REDESIGN OF THE PROCESS CONTROL OF CONCRETE SYSTEMS LTD FTY FOR A MORE EFFECTIVE AND EFFICIENT PRODUCTION PROCESS","Veeke, H.P.M.","Lodewijks, G. (advisor)","2012","","Process control; proper model; discrete simulation","","conference paper","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Transport Engineering and Logistics","","",""
"uuid:73398a15-427e-4c35-af07-4397ffa348b9","http://resolver.tudelft.nl/uuid:73398a15-427e-4c35-af07-4397ffa348b9","Experiences with process interaction based simulation in education and research","Veeke, H.P.M.","Lodewijks, G. (advisor)","2012","","simulation; process interaction; education; research","","conference paper","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Transport Engineering and Logistics","","",""
"uuid:74ab3e21-afd4-49d8-8a0f-a6a379375a0c","http://resolver.tudelft.nl/uuid:74ab3e21-afd4-49d8-8a0f-a6a379375a0c","Implementing Participatory Water Management: Recent Advances in Theory, Practice, and Evaluation","Von Korff, Y.; Daniell, K.A.; Moellenkamp, S.; Bots, P.W.G.; Bijlsma, R.M.","","2012","Many current water planning and management problems are riddled with high levels of complexity, uncertainty, and conflict, so-called “messes” or “wicked problems.” The realization that there is a need to consider a wide variety of values, knowledge, and perspectives in a collaborative decision making process has led to a multitude of new methods and processes being proposed to aid water planning and management, which include participatory forms of modeling, planning, and decision aiding processes. However, despite extensive scientific discussions, scholars have largely been unable to provide satisfactory responses to two pivotal questions: (1) What are the benefits of using participatory approaches?; (2) How exactly should these approaches be implemented in complex social-ecological settings to realize these potential benefits? In the study of developing social-ecological system sustainability, the first two questions lead to a third one that extends beyond the one-time application of participatory approaches for water management: (3) How can participatory approaches be most appropriately used to encourage transition to more sustainable ecological, social, and political regimes in different cultural and spatial contexts? The answer to this question is equally open. This special feature on participatory water management attempts to propose responses to these three questions by outlining recent advances in theory, practice, and evaluation related to the implementation of participatory water management. 
The feature is largely based on an extensive range of case studies that have been implemented and analyzed by cross-disciplinary research teams in collaboration with practitioners, and in a number of cases in close cooperation with policy makers and other interested parties such as farmers, fishermen, environmentalists, and the wider public.","adaptive management; collaborative decision making; evaluation; interactive planning; participatory modeling; participatory research; process design; public participation; social learning; stakeholder participation; water resources management","en","journal article","Resilience Alliance","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:f1e4efe8-3491-4f87-9d8c-5108f7b5970d","http://resolver.tudelft.nl/uuid:f1e4efe8-3491-4f87-9d8c-5108f7b5970d","Learning from safety in other industries","Terwel, K.C. (TU Delft Steel & Composite Structures); Zwaard, W.","","2012","The Dutch building industry has been shocked by some major structural accidents during the last 10 years, with buildings during construction as well as with delivered buildings. Several initiatives were started to improve safety. In other industries, safety awareness seemed to be more developed. In this article the Dutch building sector is compared with the aviation industry and (chemical) process industry, to see which safety-influencing factors can be improved for the building industry. It appears that the risks in relation to a building after completion are fairly low, comparable to the other industries. On the other hand, the approach towards safety in the building industry is relatively undeveloped, which gives starting points for improvement.","structural safety; building process; safety in industries","en","conference paper","International Association for Bridge and Structural Engineering","","","","","Author Manuscript","","","","","Steel & Composite Structures","","",""
"uuid:b3beaec0-34a4-427c-b520-ccc3ba97eb23","http://resolver.tudelft.nl/uuid:b3beaec0-34a4-427c-b520-ccc3ba97eb23","Efficient pricing of Asian options under Lévy processes based on Fourier cosine expansions Part I: European-style products","Zhang, B.; Oosterlee, C.W.","","2011","We propose an efficient pricing method for arithmetic and geometric Asian options under Lévy processes, based on Fourier cosine expansions and Clenshaw–Curtis quadrature. The pricing method is developed for both European-style and American-style Asian options, and for discretely and continuously monitored versions. In the present paper we focus on European-style Asian options; American-style options are treated in an accompanying part II of this paper. The exponential convergence rate of Fourier cosine expansions and Clenshaw–Curtis quadrature reduces the CPU time of the method to milliseconds for geometric Asian options and a few seconds for arithmetic Asian options. The method’s accuracy is illustrated by a detailed error analysis, and by various numerical examples.","arithmetic Asian options; Lévy processes; Fourier cosine expansions; Clenshaw–Curtis quadrature; exponential convergence","en","report","Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:717a8a83-445a-498e-bd97-e71e52ebb34e","http://resolver.tudelft.nl/uuid:717a8a83-445a-498e-bd97-e71e52ebb34e","Conceptual framework for potential implementations of multicriteria decision making (MCDM) methods for design quality assessment","Hartpulugil, T.; Prins, M.; Gultekin, A.T.; Topcu, Y.L.","","2011","Architectural design can be considered as a process influenced by many stakeholders, each of which has different decision power. Each stakeholder might have his/her own criteria and weightings depending on his/her own perspective and role. Hence design can be seen as a multi-criteria decision making (MCDM) process. Considering architectural design, its evaluation and quality assessment within a context of MCDM is not regularly performed within building processes. The aim of the paper is to find proper MCDM methodologies used in other domains for the assessment of design quality, adapt them to the construction domain, and test their applicability. Current tools for quality assessment (for instance DQI, DEEP, AEDET, HQI, LEED, BREEAM, BQA) will be reviewed and compared with several MCDM methods (i.e. AHP, ANP, PROMETHEE, SAW, and TOPSIS). Advantages and disadvantages of the outcomes gathered from these comparisons for assessment and applicability within architectural design will be discussed. Finally, reflections on the outcomes will be provided.","architectural design quality; analytic hierarchy process (AHP); design quality assessment tools; multi-criteria decision making (MCDM)","en","conference paper","EuroFM","","","","","","","","Architecture","Real Estate & Housing","","","",""
"uuid:55b86a6e-56a9-4da1-96f1-afdd564baca4","http://resolver.tudelft.nl/uuid:55b86a6e-56a9-4da1-96f1-afdd564baca4","The structural strength of glass: Hidden damage","Veer, F.A.; Rodichev, Y.M.","","2011","We discuss the “hidden damage” of glass caused by the rolling process, which results in a heterogeneous distribution of microcracks on the edge surface of a glass element; these microcracks are the fracture sources that deteriorate the strength of the glass element. It is shown that removal of this damage on the edges of glass elements increases the engineering strength of float glass significantly. Using the “hidden damage” approach, we provide a strength determination for the weakest specimens that is statistically relevant and based on a reliable engineering parameter.","glass; rolling process; hidden damage; engineering parameter; microcracks","en","journal article","Springer","","","","","","","","Architecture","Building Technology","","","",""
"uuid:2d7c870a-92fd-4aec-9d14-19ab4e405bd4","http://resolver.tudelft.nl/uuid:2d7c870a-92fd-4aec-9d14-19ab4e405bd4","Feasibility study of CO2 capture by anti-sublimation","Schach, M.O.; Oyarzun, B.A.; Schramm, H.; Schneider, R.; Repke, J.U.","","2011","Processes for carbon capture and storage have the drawback of high energy demand. In this work the application of CO2 capture by anti-sublimation is analyzed. The process was simulated using Aspen Plus. Process description is accomplished by phase equilibria models which are able to reproduce the vapor-liquid and vapor-solid equilibria. Different process configurations are proposed. Total electric energy demand was defined as the evaluation criterion, and the most suitable configuration was selected within technical limits. Further performance enhancement was achieved by improving the compression cooling cycles. An economic evaluation was performed for the low temperature process and the results were compared to a chemical absorption process with monoethanolamine. CO2 capture by anti-sublimation showed a better performance concerning the energy demand but with a reduced economic benefit due to higher equipment cost.","CO2 capture; anti-sublimation; simulation; process design","en","journal article","Elsevier","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:474072fe-af77-4f2e-84df-cc375d38df7f","http://resolver.tudelft.nl/uuid:474072fe-af77-4f2e-84df-cc375d38df7f","Stabilization of gravel deposits using microorganisms","Van der Star, W.R.L.; Van Wijngaarden, W.K.; Van Paassen, L.A.; Van Baalen, L.R.; Zwieten, G.","","2011","One of the techniques used for the construction of underground infrastructure is horizontal directional drilling (HDD). This trenchless method is complicated when crossing gravel deposits, as a borehole in coarse gravel tends to collapse, causing the drill pipe to get stuck or the installation of the product pipeline to fail due to excessive pull forces. In order to find a solution for the problem of borehole instability, the Biogrout process was adapted for borehole stabilization in gravel. In the Biogrout process, loose sand is converted into sandstone by injection of a dedicated mixture into the subsurface, which stimulates micro-organisms to catalyze chemical reactions leading to the precipitation of calcium carbonate (CaCO3) crystals. These crystals form ‘bridges’ between the grains, increasing the strength and stiffness of the material. After a first successful lab-scale test in 2008 in which gravel was cemented, a 3 m3 container was treated, after which a hole was successfully drilled through it using HDD equipment. Following the success of this container test, two field applications were performed as part of the installation of two 48 inch steel gas pipelines with lengths of 600 and 900 meters near Nijmegen NL. During these field applications, a volume of 1,000 m3 of gravel was stabilized twice, each time in only 7 days, using the Biogrout technique, after which a HDD was performed successfully.","Horizontal Directional Drilling (HDD); biogrout process; in situ cementation; biological methods; gravel","en","conference paper","IOS Press","","","","","","","2013-12-31","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:4a00a3a8-d4de-4318-887a-2459e3565a5b","http://resolver.tudelft.nl/uuid:4a00a3a8-d4de-4318-887a-2459e3565a5b","Supporting the Constructive Use of Existing Hydrological Models in Participatory Settings: A Set of “Rules of the Game”","Bots, P.W.G.; Bijlsma, R.; Von Korff, Y.; Van der Fluit, N.; Wolters, H.","","2011","When hydrological models are used in support of water management decisions, stakeholders often contest these models because they perceive certain aspects to be inadequately addressed. A strongly contested model may be abandoned completely, even when stakeholders could potentially agree on the validity of part of the information it can produce. The development of a new model is costly, and the results may be contested again. We consider how existing hydrological models can be used in a policy process so as to benefit from both hydrological knowledge and the perspectives and local knowledge of stakeholders. We define a code of conduct as a set of “rules of the game” that we base on a case study of developing a water management plan for a Natura 2000 site in the Netherlands. We propose general rules for agenda management and information sharing, and more specific rules for model use and option development. These rules structure the interactions among actors, help them to explicitly acknowledge uncertainties, and prevent expertise from being neglected or overlooked. We designed the rules to favor openness, protection of core stakeholder values, the use of relevant substantive knowledge, and the momentum of the process. We expect that these rules, although developed on the basis of a water-management issue, can also be applied to support the use of existing computer models in other policy domains. 
As rules will shape actions only when they are constantly affirmed by actors, we expect that the rules will become less useful in an “unruly” social environment where stakeholders constantly challenge the proceedings.","case study; conflict; hydrological model; institutions; Netherlands; participation; policy process; water management","en","journal article","Resilience Alliance","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:9c475d92-4149-476d-9971-0586cfccec01","http://resolver.tudelft.nl/uuid:9c475d92-4149-476d-9971-0586cfccec01","An effective release process in building and construction","Reefman, R.J.B.; Van Nederveen, G.A.","","2011","The level of failure costs in Building and Construction is still high. A major cause of failure costs is the use of invalid or wrong documents/models in the process. The release process is about controlling the quality of documents/models in a structured way. The three major attributes of a document/model needed to manage it are its identity, its version, and the (maturity) status of this version. In Building and Construction processes the status of a document/model is hardly used. The article proposes a release process in the environment of an extended enterprise based on the natural principles of releasing information. This basic release process will be extended to implement concurrent engineering, the aspect of mutual involvement, in a structured way in the release process.","document / model release process; document lifecycle; document version and status","en","conference paper","Metratech","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","",""
"uuid:84bb33ba-e868-46f5-9fbc-47382b763271","http://resolver.tudelft.nl/uuid:84bb33ba-e868-46f5-9fbc-47382b763271","Success and fail factors in sustainable real estate renovation projects","Volker, L.","","2011","Sustainability remains an important issue for the construction industry. Yet, sustainable real estate developments are still considered as highly ambitious projects. To find out how and why sustainable renovation projects actually became sustainable, we systematically evaluated 21 leading Dutch real estate renovation projects. In each project we interviewed the client, consultant, architect and contractor. Based on the results it was concluded that it is not necessary to have a pre-defined (sustainability) ambition in order to realize a project that can be considered sustainable in practice. Most of the respondents indicated that the ambition developed throughout the project, mainly because of the potential sustainable reputation of the parties involved in the project. Ambitions were not set as highly as expected: about half of the respondents consider preservation of the building and recycling as sustainable solutions already. The composition, management and collaboration of the construction team were found to be very important during the process. In this sense sustainable projects do not appear to be any different from regular projects, so then the only question is: Why not sustainable?","ambition; sustainability; real estate renovation; project management; process management","en","conference paper","CIB","","","","","","","","Technology, Policy and Management","Energy and Industry","","","",""
"uuid:acce69fb-86c6-403f-a972-0735df7218e9","http://resolver.tudelft.nl/uuid:acce69fb-86c6-403f-a972-0735df7218e9","Hydrodynamic erosion process of undisturbed clay","Zhao, G.; Visser, P.J.; Vrijling, J.K.","","2011","This paper describes the hydrodynamic erosion process of undisturbed clay due to turbulent flow, based on theoretical analysis and experimental results. Undisturbed clay has the unique and complicated characteristic of cohesive forces among clay particles, which differ greatly from those of disturbed clay and non-cohesive sand. Based on momentum equilibrium, the critical incipient velocity is derived from the forces of particle weight under water, cohesive force among particles around the clay particle, uplift force and drag force. Via turbulent boundary layer flow theory, the critical stress can be connected with these forces. The formulae for the incipient stress and the critical velocity have been calibrated and validated with the results of undisturbed clay tests. A new formula for the erosion rate is proposed. The study gives a new insight into the erosion process of undisturbed clay and the resulting sediment transport.","erosion process; undisturbed clay; turbulent boundary-layer flow; incipient stress","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:10e25404-4b8e-443b-9f16-5e6e5c2a7444","http://resolver.tudelft.nl/uuid:10e25404-4b8e-443b-9f16-5e6e5c2a7444","Characterising Combustion in Diesel Engines: Using parameterised finite stage cylinder process models","Ding, Y.","Stapersma, D. (promotor); Grimmelius, H.T. (promotor)","2011","Characterising combustion of diesel engines is necessary not only when researching the instantaneous combustion phenomena but also when investigating the change of the combustion process under variable engine operating conditions. An effective way to achieve this goal is to parameterize the combustion process using a finite combustion stage cylinder process model; the parameters can then be modeled to give a global description of diesel engine combustion. The main objective of this thesis is to determine how to calculate (simulate) the parameters defining the finite stage cylinder process model using both theoretical and experimental methods. The latter is essential but also complicated.","diesel engine; combustion; heat release; Seiliger process","en","doctoral thesis","VSSD","","","","","","","","Mechanical, Maritime and Materials Engineering","Department of Maritime and Transport Technology","","","",""
"uuid:3216e2be-5e66-4993-9477-e4457bb6cedc","http://resolver.tudelft.nl/uuid:3216e2be-5e66-4993-9477-e4457bb6cedc","Forcing mechanisms of dielectric barrier discharge plasma actuators at carrier frequency of 625 Hz","Kotsonis, M.; Ghaemi, S.","","2011","The forcing behavior of a dielectric barrier discharge (DBD) actuator is investigated experimentally using a time-resolved particle image velocimetry (PIV) system in conjunction with a phase shifting technique. The spatio-temporal evolution of the induced flowfield is accurately captured within one high voltage (HV) cycle allowing the calculation of the instantaneous velocity and acceleration. Additional voltage and current measurements provide the power consumption for each case. Four different applied voltage waveform shapes are independently tested, namely, sine, square, positive sawtooth, and negative sawtooth at fixed applied voltage (10 kVpp) and carrier frequency (625 Hz). The instantaneous flowfields reveal the effect of the plasma forcing during the HV cycle. Sine waveform provides large positive forcing during the forward stroke, with minimal but still positive forcing during the backward stroke. Square waveform provides strong and concentrated positive and negative forcing at the beginning of the forward and backward stroke, respectively. Positive sawtooth provides positive but weak forcing during both strokes while the negative sawtooth case produces observable forcing only during the forward stroke. Results indicate the inherent importance of negative ions on the force production mechanisms of DBD’s. 
Furthermore, the revealed influence of the waveform shape on the force production can provide guidelines for the design of custom asymmetric waveforms for the improvement of the actuator’s performance.","actuators; discharges (electric); plasma devices; plasma diagnostics; plasma dielectric properties; plasma transport processes; power consumption; spatiotemporal phenomena","en","journal article","American Institute of Physics","","","","","","","","Aerospace Engineering","Aerodynamics & Wind Energy","","","",""
"uuid:18354f6f-2187-467f-ad98-bc815caa285b","http://resolver.tudelft.nl/uuid:18354f6f-2187-467f-ad98-bc815caa285b","Concrete shell structures revisited: Introducing a new 'low-tech' construction method using vacuumatics formwork","Huijben, F.; Van Herwijnen, F.; Nijsse, R.","","2011","This paper provides a new perspective on the construction process of concrete shell structures and introduces a new cost saving approach for constructing (single curved) concrete shells using Vacuumatics formwork.","vacuumatics; concrete shells; construction process; formwork system","en","conference paper","CIMNE","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","",""
"uuid:a4161ffd-7347-40b2-9773-33494e5e1ec8","http://resolver.tudelft.nl/uuid:a4161ffd-7347-40b2-9773-33494e5e1ec8","Statistical Modeling of Shape and Motion of the Wrist Bones","Van de Giessen, M.","Van Vliet, L.J. (promotor); Grimbergen, C.A. (promotor)","2011","Carpal instability occurs when the wrist bones assume a pathological posture, e.g. due to ligament rupture as a result of trauma. Ligament rupture cannot be diagnosed reliably directly, as current medical imaging modalities do not provide sufficient soft-tissue contrast (X-ray, CT) or lack a sufficiently high resolution (CT, MRI). Ligament rupture, however, affects the motion patterns of the carpal bones and thereby limits their functionality. Non-healthy deviating motions and postures can be identified by comparing possibly pathological wrist postures and motions to the healthy motion patterns. In this research a statistical motion model is constructed from 4D-RX images to capture the natural variations in motion between different wrists. Because of differences in bone shape and size between individuals and because of positioning inaccuracies between acquisitions, constructing a statistical model that is sensitive to small pathological motion deviations is not straightforward. By describing the motions of carpal bones in a local way, the statistical motion model is insensitive to global size and shape variations between wrists of different individuals and sufficiently sensitive to detect small deviating motion patterns due to ligament ruptures and also to estimate the healthy bone positions and orientations of a pathological wrist. The latter makes the developed statistical motion model a valuable tool for wrist diagnosis and surgical planning.","Image Processing; Computer Tomography; Wrist; Statistical Model; Kinematic Model; Active Shape Model; Scapholunar Dissociation; 4D-RX; Ligament","en","doctoral thesis","","","","","","","","2011-11-02","Applied Sciences","Imaging Science & Technology","","","",""
"uuid:109b1e35-fcad-488e-9388-0fe923098fca","http://resolver.tudelft.nl/uuid:109b1e35-fcad-488e-9388-0fe923098fca","Microwave Enhanced Reactive Distillation","Altman, E.","Stankiewicz, A.I. (promotor); Stefanidis, G. (promotor)","2011","The application of electromagnetic irradiation in the form of microwaves (MW) has gathered the attention of the scientific community in recent years. MW used as an alternative energy source for chemical syntheses (microwave chemistry) can provide clear advantages over conventional heating methods in terms of reaction time, yield and selectivity. Several applications using this technology have been proven effective in diverse scientific fields. In this thesis, the scope of microwave chemistry was further expanded to a reactive distillation (RD) process with the primary objective to evaluate its use in view of possible process intensification (PI). The ultimate goal was to conceptually address the novel concept of a MW enhanced RD process (MWeRD) based on demonstrated effects in partial processes, namely molecular separation and chemical reaction. The thesis is divided in four main parts, each of them covering different aspects of the research. Part I comprises Chapters 1, 2 and 3, giving the introductory guideline and the basic data needed. To prove the concept, the synthesis of n-propyl propionate was chosen as the case system. The thermo-physical data required to accurately address RD design and operation, and the dielectric properties relevant for MW dielectric heating, were experimentally determined. The thermodynamic behavior of the system was accurately predicted using a fitted UNIQUAC-HOC model, while experimental reaction kinetics data were used to fit parameters of a pseudo-homogeneous model. Both models were used to build the residue curve maps needed to determine process feasibility.
Experiments performed in a conventionally heated pilot-scale column (DN-50) equipped with two types of structured packings (Sulzer BX and Katapak-SP 11) are reported. In addition, a non-equilibrium stage model (NEQ model) for the column was implemented. Model predictions were compared to experimental results showing good accuracy. Theoretical investigations of the most important operating parameters (total feed, molar feed ratio, reflux ratio and heat duty) and their effect on the overall process performance are presented. The fundamental research performed with MW was divided into two parts. First, the influence of MW on molecular separation of the binary mixtures composing the quaternary case system is discussed based on experimental results. Four binary pairs were studied showing, in some cases, an enhanced separation of the components. Then, the effects of MW radiation on the case reaction were studied comparing reaction conditions under MW and conventional heating using different homogeneous and heterogeneous catalysts. Of all the catalysts tested, Zn triflate proved the most effective under microwave heating, producing 40% more ester compared to the conventionally heated experiment. Finally, the general benefits and barriers of the technology integration are discussed based on the results of the MW enhanced reaction and separation. The novel concept of a MWeRD process is presented, giving recommendations for further research in terms of hardware, operating conditions and up-scalability of the process.","process intensification; reactive distillation; n-propyl propionate; microwaves","en","doctoral thesis","","","","","","","2011-11-17","Mechanical, Maritime and Materials Engineering","Process & Energy","","","",""
"uuid:d2f17825-76eb-4d83-92cc-102ea3692b2b","http://resolver.tudelft.nl/uuid:d2f17825-76eb-4d83-92cc-102ea3692b2b","Land Information Management and its (3D) Database Foundation","Wammes, Han","","2011","Since the inception of the Oracle Spatial Engine over ten years ago, Oracle has been striving to make spatial information an integral part of its information management architecture, which includes areas such as GIS, document management and archiving, and business intelligence. These were initially built as disparate solutions on top of the Oracle object-relational / native XML database, but it soon became quite clear that a more holistic and standardized approach to information management would create much more value for customers. Managing spatial databases, document stores and data warehouses in one database environment, using a unified approach based on open standards, greatly relieves the integration, management and security burden of dealing with such a diversity of structured and unstructured data. Today these capabilities are an integral part of Oracle’s vision on enterprise information management. They also fit naturally in current strategies on SOA, Engineered Systems, Cloud Computing and Big Data, which require not only a unified approach to information management, but also a unified, open-standards-based approach to process management. The current trends in the GIS domain boil down to exactly these strategies. Especially in the Land Information Management domain, many organizations are reconsidering their current systems, or implementing new systems where none existed before, as in developing countries, to accommodate new requirements such as an open-standards-based, integrated approach to managing information, interoperability between systems, and support for 3D data types in the GIS domain. e-Government initiatives and initiatives such as
INSPIRE require this open approach towards Land Information Management as land is probably the most important asset, humanity has, to manage our future. In this paper it will be shown how Oracle has been adopting modern technologies as part of its strategy, especially in the 3D area. It will also be shown how e.g. the LADM/STDM application scheme helps in defining Oracle’s strategy towards Land Information Management to create a more agile solution based on IT strategies in dealing with current and future requirements.","LADM/STDM; Oracle Database; Oracle Spatial; 3D data-types; Oracle Land Information Management Proposition; SOA; Cloud Computing; Interoperability; Security; Information Management; Process Management; e-Government; INSPIRE","en","conference paper","","","","","","","","","","","","","",""
"uuid:15802be4-3ed3-4cc7-971a-5178a89907da","http://resolver.tudelft.nl/uuid:15802be4-3ed3-4cc7-971a-5178a89907da","3D Cadastre Web Map: Prospects and Developments","Aditya, Trias; Iswanto, Febri; Wirawan, Ade; Laksono, Dany P.","","2011","Although 3D cadastre web maps offer benefits to, among others, planning and disaster management application domains, they pose some technical constraints when implemented using current web technologies. These technical constraints include field data processing, data compatibility and browser limitations. One technical solution in support of 3D data processing is the use of an OpenGIS standard, i.e. KML, in streamlining 3D measurements into the existing cadastre geodatabase. Another useful standard is X3D, which stores and visualizes 3D objects above or below land parcels. Some prospects of the use of those open standards will be illustrated through case studies. The first case study is the use of KML as an intermediate format to bridge CAD with PostgreSQL/PostGIS for mapping space-related rights. The second case study is the use of cadastre and environment-related data in X3D format in support of rapid mapping for Mt. Merapi post-disaster assessments. From those two case studies, technical specifications for developing a 3D web map solution suitable for a proposed hybrid cadastre will be discussed.","PostGIS; 3D Cadastre Web Map; 3D Data Processing; CAD; KML; X3D","en","conference paper","","","","","","","","","","","","","",""
"uuid:a783e581-bc7a-4efa-adcb-7e9201840367","http://resolver.tudelft.nl/uuid:a783e581-bc7a-4efa-adcb-7e9201840367","Managing project complexity: A study into adapting early project phases to improve project performance in large engineering projects","Bosch-Rekveldt, M.G.C.","Verbraeck, A. (promotor); Bakker, H.L.M. (promotor)","2011","Engineering projects become increasingly more complex, and project complexity is assumed to be one of the causes for projects being delivered late and over budget. However, what this project complexity actually comprised was unclear. To improve the overall project performance, this study focuses on identifying the potential causes of complexity in projects. Moreover, it is investigated how the early project phase could be adapted to the complexity of the project. The research is performed with companies of the NAP network, which brings together companies from the entire value chain in the Dutch process industry. The study was structured in four phases and included exploratory case studies, a quantitative survey, explanatory case studies and an evaluative survey. By combining qualitative and quantitative work, this study is an example of successfully applying a mixed methods approach in project management research. The main results of this study are the TOE (Technical, Organizational, External) framework to grasp project complexity and recommendations on managing project complexity in the early project phase. To improve project performance, the role of integrated teams (joint owner/contractor teams), as well as thorough application of risk management, is shown to be crucial.","Complexity; Project Management; Front-end development; Project Performance; Large Engineering Projects; Process Industry","en","doctoral thesis","Delft Centre for Project Management","","","","","","","2011-11-15","Technology, Policy and Management","TSE","","","",""
"uuid:299142bc-2b72-47ce-b8da-b59e8c4c2834","http://resolver.tudelft.nl/uuid:299142bc-2b72-47ce-b8da-b59e8c4c2834","Generation of degenerate, factorizable, pulsed squeezed light at telecom wavelengths","Gerrits, T.; Stevens, M.J.; Baek, B.; Calkins, B.; Lita, A.; Glancy, S.; Knill, E.; Nam, S.W.; Mirin, R.P.; Hadfield, R.H.; Bennink, R.S.; Grice, W.P.; Dorenbos, S.N.; Zijlstra, T.; Klapwijk, T.M.; Zwiller, V.","","2011","We characterize a periodically poled KTP crystal that produces an entangled, two-mode, squeezed state with orthogonal polarizations, nearly identical, factorizable frequency modes, and few photons in unwanted frequency modes. We focus the pump beam to create a nearly circular joint spectral probability distribution between the two modes. After disentangling the two modes, we observe Hong-Ou-Mandel interference with a raw (background corrected) visibility of 86% (95%) when an 8.6 nm bandwidth spectral filter is applied. We measure second order photon correlations of the entangled and disentangled squeezed states with both superconducting nanowire single-photon detectors and photon-number-resolving transition-edge sensors. Both methods agree and verify that the detected modes contain the desired photon number distributions.","quantum optics; photon statistics; quantum detectors; squeezed states; quantum information and processing","en","journal article","Optical Society of America","","","","","","","","Applied Sciences","QN/Quantum Nanoscience","","","",""
"uuid:2261aaac-afb4-48b4-a0b4-c482efab23c8","http://resolver.tudelft.nl/uuid:2261aaac-afb4-48b4-a0b4-c482efab23c8","Fractal disperse hydrogen sorption kinetics in spark discharge generated Mg/NbOx and Mg/Pd nanocomposites","Anastasopol, A.; Pfeiffer, T.V.; Schmidt-Ott, A.; Mulder, F.M.; Eijt, S.W.H.","","2011","Isothermal hydrogen desorption of spark discharge generated Mg/NbOx and Mg/Pd metal hydride nanocomposites is consistently described by a kinetic model based on multiple reaction rates, in contrast to the Johnson-Mehl-Avrami-Kolmogorov [M. Avrami, J. Phys. Chem. 9, 177 (1941); W. A. Johnson and R. F. Mehl, Trans. Am. Inst. Min., Metal. Eng. 135, 416 (1939); A. N. Kolmogorov, Izv. Akad. Nauk SSSR, Ser. Mat. 3, 355 (1937); F. Liu, F. Sommer, C. Bos, and E. J. Mittemeijer, Int. Mat. Rev. 52, 193 (2007)] model which is commonly applied to explain the kinetics of metal hydride transformations. The broad range of reaction rates arises from the disperse character of the particle size and the dendritic morphology of the samples. The model is expected to be generally applicable for metal hydrides which show a significant variation in particle sizes, in configuration and/or chemical composition of local surroundings of the reacting nanoparticles.","chemical analysis; chemical reactions; desorption; magnesium; magnesium compounds; nanocomposites; niobium compounds; palladium; particle size; plasma materials processing; sparks","en","journal article","American Institute of Physics","","","","","","","","Applied Sciences","","","","",""
"uuid:29e86b3f-fee4-41e9-9b56-cda14ef928fb","http://resolver.tudelft.nl/uuid:29e86b3f-fee4-41e9-9b56-cda14ef928fb","Thermoplastic Composite Wind Turbine Blades: Kinetics and Processability","Teuwen, J.J.E.","Beukers, A. (promotor); Bersee, H.E.N. (promotor)","2011","In previous research, the potential of glass fibre reinforced anionic polyamide-6 (APA-6) composites for use in wind turbine blades was proven. Based on polymer properties, viscosity, processing time, costs and recyclability, APA-6 composites are considered the most suitable reactive thermoplastic material candidate. However, more research is needed to mature the knowledge of the APA-6 material and its processing, which can be achieved by understanding the effect of the individual steps in the manufacturing process and by studying the material behaviour in more detail. First of all, an experimental study on the effect of the individual steps in manufacturing and post-processing was performed to increase the homogeneity of the composites and identify the most important processing parameters. Secondly, semi-empirical models for the prediction of the reaction kinetics and rheology were built to better estimate the infusion time, the start of reaction and the behaviour of the material. These models were then used to predict the heat build-up due to the exothermic reaction in thick-walled composites. Based on the models for the reaction kinetics and rheology and the knowledge built from the experimental investigation, it is thought that an optimisation of the whole manufacturing process for a specific product is feasible and that the material behaviour within that process can be well predicted.","thermoplastic composites; vacuum infusion; reaction kinetics; process optimisation","en","doctoral thesis","","","","","","","","","Aerospace Engineering","Design and Production of Composite Structures","","","",""
"uuid:d4adf4ca-168b-4aa8-aa9e-2aea1c0651f1","http://resolver.tudelft.nl/uuid:d4adf4ca-168b-4aa8-aa9e-2aea1c0651f1","Making explicit in design education: Generic elements in the design process","Van Dooren, E.J.G.C.","","2011","Designing is a complex, personal, creative and open-ended skill. How can teachers help students in learning to design?","design process; generic elements; design education","en","conference paper","TU Delft & IASDR","","","","","","","","Architecture","Building Technology","","","",""
"uuid:01438c6e-1075-448a-959a-d8346e1a3b84","http://resolver.tudelft.nl/uuid:01438c6e-1075-448a-959a-d8346e1a3b84","Streamlining cross departmental interactions of back office processes in financial services","Rutte, C.R.","","2011","Due to the increased competitive environment in the retail banking industry, customer satisfaction and efficient back office processes have become more important. Lean Six Sigma (LSS) is a management style that is focused on improving processes and increasing customer satisfaction. Discrete-Event Simulation (DES) is a method that is able to capture dynamic processes, and supports the analysis of processes and the evaluation of alternative designs. The literature on an integrated approach of LSS and DES is scant. In this article the research question is answered: “When and how can Discrete-Event Simulation and Lean Six Sigma be integrated?”. A case is studied in which improvements to the closing process of current accounts at a retail bank are designed with an integrated approach of LSS and DES techniques. The case shows that DES can be properly used in LSS when a dynamic, interactive and complicated process is improved. Vice versa, LSS provides tools and dynamics that can support a DES study: a strong focus on analyzing issues, the involvement of stakeholders in generating alternative designs, and the project management tools provided by LSS. Future research should focus on the value of the integrated approach for implementing designs, the effects of animation and the use of the integrated approach with a full project team.","Reengineering; Lean Six Sigma; Discrete-Event Simulation; Business Processes Modelling; Financial services","en","journal article","Systems Engineering, Policy Analysis and Management (SEPAM), Delft University of Technology","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:acad9374-1d18-4dec-844b-79bcf13eacb2","http://resolver.tudelft.nl/uuid:acad9374-1d18-4dec-844b-79bcf13eacb2","A bound for the range of a narrow light beam in the near field","Verbeek, P.W.; Van den Berg, P.M.","","2011","We investigate the possibility of light beams that are both narrow and long range with respect to the wavelength. On the basis of spectral electromagnetic field representations, we have studied the decay of the evanescent waves, and we have obtained some bounds for the width and range of a light beam in the near-field region. The range determines the spatial bound of the near field in the direction of propagation. For a number of representative examples we found that narrow beams have a short range. Our analysis is based on the uncertainty relations between spatial position and spatial frequency.","Fourier optics and signal processing; physical optics; electromagnetic optics; wave propagation","en","journal article","Optical Society of America","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:3930d421-51d2-41a2-bb92-6666fe142bbc","http://resolver.tudelft.nl/uuid:3930d421-51d2-41a2-bb92-6666fe142bbc","A multidisciplinary challenge within the procedure of designing in a complex undefined domain","Shahnoori, S.; Van den Dobbelsteen, A.A.J.F.","","2011","Modern architectural design may generally be regarded as complex. Designing in an undefined domain within the field of architecture (e.g. from the urban scale down to the building and its materialisation) is therefore a risky task. In some situations such a design has to deal with one or more severe constraints, so the design environment represents an extremely busy situation. Such a design, for example designing for the Sustainable Reconstruction of Houses in a Seismic Desert environment (SRH-SD), may become too vague to reach any appropriate conclusions. Systemisation is a good solution to avoid the possible complications and chaos, and has been provided in a larger frame of study (see also Shahnoori, 2008; Shahnoori, 2009; Shahnoori et al., 2010a; Shahnoori et al., 2010b; Shahnoori et al., 2011a). In this solution, the design process has been modelled as a system in the Glocal (global and local) Process Model, or GPM. Each phase of the GPM has been assumed to be a subsystem. To formulate the complications in a critical phase, such as the Exploration phase, the items and segments of the phases also need organisation and arrangement. As the previous research concentrated on the first phase (the “need”), the current research focuses on the second phase of the GPM, the Exploration phase. First, the importance and crucial role of the exploration phase is discussed. To enable the design to benefit from the valuable outcomes of this phase, the phase needs to be organised: it is first postulated to be a subsystem, which then comprises its own internal environment, elements and structure. After this discussion and the accompanying argumentation, a model of such a subsystem is presented.","complex design situation; process modelling; sub-systemisation; exploration phase; sustainable reconstruction; seismic desert houses","en","conference paper","","","","","","","","","Architecture","Building Technology","","","",""
"uuid:5356af2b-2475-4a74-bfc1-0100fffc65e3","http://resolver.tudelft.nl/uuid:5356af2b-2475-4a74-bfc1-0100fffc65e3","Model-based Rational and Systematic Protein Purification Process Development: A Knowledge-based Approach","Kungah Nfor, B.","Van der Wielen, L.A.M. (promotor)","2011","The increasing market and regulatory (quality and safety) demands on therapeutic proteins call for radical improvement in their manufacturing processes. Addressing these challenges requires the adoption of strategies and tools that enable faster and more efficient process development. This thesis is concerned with the development and systematic integration of state-of-the-art process development tools. A number of such tools were developed: high-throughput experimentation, bio-thermodynamic models, process modeling and optimization tools, and rational methodologies. Their systematic integration for in silico model-based biopharmaceutical process development was successfully demonstrated, with the following main advantages: (1) better understanding of the critical process and product quality attributes; (2) highly economical and efficient use of resources; (3) rational and fast process development; (4) ease of process validation in view of the Quality by Design (QbD) initiative. This thesis, therefore, sets the basis for knowledge-driven biopharmaceutical process development.","Biopharmaceutical Process Development; Protein Purification; Chromatography; Biothermodynamics; Process Modeling and Optimization","en","doctoral thesis","","","","","","","","","Applied Sciences","Biotechnology","","","",""
"uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","http://resolver.tudelft.nl/uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","Event-based progression detection strategies using scanning laser polarimetry images of the human retina","Vermeer, K.A.; Lo, B.; Zhou, Q.; Vos, F.M.; Vossepoel, A.M.; Lemij, H.G.","","2011","Monitoring glaucoma patients and ensuring optimal treatment requires accurate and precise detection of progression. Many glaucomatous progression detection strategies may be formulated for Scanning Laser Polarimetry (SLP) data of the local nerve fiber thickness. In this paper, several strategies, all based on repeated GDx VCC SLP measurements, are tested to identify the optimal one for clinical use. The parameters of the methods were adapted to yield a set specificity of 97.5% on real image series. For a fixed sensitivity of 90%, the minimally detectable loss was subsequently determined for both localized and diffuse loss. Due to the large size of the required data set, a previously described simulation method was used for assessing the minimally detectable loss. The optimal strategy was identified and was based on two baseline visits and two follow-up visits, requiring two-out-of-four positive tests. Its associated minimally detectable loss was 5–12 μm, depending on the reproducibility of the measurements.","progression detection; simulation; glaucoma; polarimetry; optimization; image processing","en","journal article","Elsevier","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:adafbc72-36aa-447f-8da1-73648590d695","http://resolver.tudelft.nl/uuid:adafbc72-36aa-447f-8da1-73648590d695","Correlated photon-pair generation in a periodically poled MgO doped stoichiometric lithium tantalate reverse proton exchanged waveguide","Lobino, M.; Marshall, G.D.; Xiong, C.; Clark, A.S.; Bonneau, D.; Natarajan, C.M.; Tanner, M.G.; Hadfield, R.H.; Dorenbos, S.N.; Zijlstra, T.; Zwiller, V.; Marangoni, M.; Ramponi, R.; Thompson, M.G.; Eggleton, B.J.; O'Brien, J.L.","","2011","We demonstrate photon-pair generation in a reverse proton exchanged waveguide fabricated on a periodically poled magnesium doped stoichiometric lithium tantalate substrate. Detected pairs are generated via a cascaded second order nonlinear process where a pump laser at wavelength of 1.55 μm is first doubled in frequency by second harmonic generation and subsequently downconverted around the same spectral region. Pairs are detected at a rate of 42/s with a coincidence to accidental ratio of 0.7. This cascaded pair generation process is similar to four-wave-mixing where two pump photons annihilate and create a correlated photon pair.","ion exchange; lithium compounds; magnesium compounds; multiwave mixing; optical fabrication; optical harmonic generation; optical pumping; optical waveguides; stoichiometry; two-photon processes","en","journal article","American Institute of Physics","","","","","","","","Applied Sciences","QN/Quantum Nanoscience","","","",""
"uuid:4c829c27-4d14-4731-830c-3ec40afbde76","http://resolver.tudelft.nl/uuid:4c829c27-4d14-4731-830c-3ec40afbde76","Low-complexity full-melt laser-anneal process for fabrication of low-leakage implanted ultrashallow junctions","Biasotto, C.; Gonda, V.; Nanver, L.K.; Scholtes, T.L.M.; Van der Cingel, J.; Vidal, D.; Jovanovic, V.","","2011","Good-quality ultrashallow n+p junctions are formed using 5-keV amorphizing As+ implantations followed by a single-shot excimer laser anneal for dopant activation. By using an implant that is self-aligned to the contact windows etched in an oxide isolation layer, straightforward processing of the diodes is achieved with postimplantation processing temperatures kept below 400°C. A possible source of junction leakage at the perimeter caused by dip-etch enlargement of the contact window, also confirmed by transmission electron microscopy (TEM) analysis, is identified, and diode performance is improved by increasing the junction/contact window overlap. The optimum performance in terms of low leakage, shallow junctions, and low resistivity is achieved for 30° tilted implants and by applying a thin laser-reflective aluminum layer. This work isolates the minimum requirements for achieving low-leakage diode characteristics.","excimer laser annealing; ultrashallow junctions; tilted implantations; low-temperature processing; reflective masking layer","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","","",""
"uuid:afecbacd-0ac1-4a01-b7ab-db1ba0c42f50","http://resolver.tudelft.nl/uuid:afecbacd-0ac1-4a01-b7ab-db1ba0c42f50","Model Reduction in Chemical Engineering: Case studies applied to process analysis, design and operation","Dorneanu, B.","Grievink, J. (promotor); Bildea, C.S. (promotor)","2011","During the last decades, models have become widely used for supporting a broad range of chemical engineering activities, such as product and process design and development, process monitoring and control, real time optimization of plant operation or supply chain management. Although tremendous advancements continue to take place in the development of numerical techniques and the acceleration of the computing speed, these advancements have been outpaced by the tendency to make rigorous models of much more complicated and extensive systems. Such rigorous models cannot always be effectively used for design and optimisation. A reduction of the model size and complexity is required to make a model-based solution practical. Many current numerical approaches in systems engineering apply order-reduction to a model in its entirety, without preserving the underlying network structure of the process or its multi-scale decomposition. Retaining these meaningful structural features of a process in a reduced model is a necessity for numerous applications. This is the motivation for the research and the results presented in this thesis. The novelty of this thesis is in systematizing and exploiting the essential structural features of a process in model reduction. The model reduction approach aims first at simplifying the physical and the behavioural structure, as well as the systemic level of the chemical process in the model. Only then are additional mathematical and numerical (scheme) reductions selectively applied to individual compartments or units. 
In the following step, the reduced models of the individual units are connected at system level and the reduced model of the full process is obtained. In this way, the model reduction procedure is able to preserve the essential structural features of the process. Moreover, the physical meaning of the variables and equations is kept as much as possible. The feasibility and the advantages of the approach are presented for two types of applications: (1) the iso-butane alkylation process, an example of a complex process with relatively simple (one-phase) products; and (2) the freezing step in ice cream manufacture, an example of a single process unit with a complex product. The model reduction procedure works well for the cases considered. The resulting models are solved in acceptable amounts of time. Moreover, they are successfully used for applications such as the assessment of the plantwide control structures and the dynamic optimization of the plant operation for the iso-butane alkylation process, and the sensitivity analysis of the model’s parameters in the case of the ice cream freezing process. However, the issue of the optimality with respect to the level of the multi-scale decomposition when developing the reduced model is still open.","model reduction; process modelling; plantwide control; dynamic optimization; alkylation; ice cream freezing","en","doctoral thesis","CPI Wohrmann Print Service","","","","","","","2011-07-05","Applied Sciences","Chemical Engineering","","","",""
"uuid:4215b4d9-0f2a-44cc-b6fa-b7a66a161a75","http://resolver.tudelft.nl/uuid:4215b4d9-0f2a-44cc-b6fa-b7a66a161a75","Experimental study investigating various shoreface nourishment designs","Walstra, D.J.R.; Hoyng, C.W.; Tonnon, P.K.; Van Rijn, L.C.","","2011","This experimental study focuses on the morphological development of a near-equilibrium profile on which two types of shoreface nourishments are placed. As previous studies have indicated that the efficiency of nourishments is mainly influenced by the water depth in which they are constructed, two cross-shore locations are considered for an accretive and an erosive wave condition. A nourishment relatively high in the profile covering the trough and a nourishment relatively low in the profile just seawards of the breaker bar were investigated. Detailed measurements of wave height, velocities and sediment transport are combined with the observed morphological development to identify the processes that dominate the morphological development. The results confirm that the cross-shore location of the nourishment has a major influence. The nourishment in relatively deep water reduces the erosion of the upper part of the profile by about 20% for the accretive condition and 40% for the erosive condition. The nourishment higher in the profile results in a reduction of the erosion volume of 60% for both wave conditions.","shoreface nourishment; physical experiment; process measurements","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:00af288f-acaf-48ec-ac90-812f1c8c4988","http://resolver.tudelft.nl/uuid:00af288f-acaf-48ec-ac90-812f1c8c4988","Seismoelectric interface response: Experimental results and forward model","Schakel, M.D.; Smeulders, D.M.J.; Slob, E.C.; Heller, H.K.J.","","2011","Understanding the seismoelectric interface response is important for developing seismoelectric field methods for oil exploration and environmental/engineering geophysics. The existing seismoelectric theory has never been validated systematically by controlled experiments. We have designed and developed an experimental setup in which acoustic-to-electromagnetic wave conversions at interfaces are measured. An acoustic source emits a pressure wave that impinges upon a porous sample. The reflected electric-wave potential is recorded by a wire electrode. We have also developed a full-waveform electrokinetic theoretical model based on the Sommerfeld approach and have compared it with measurements at positions perpendicular and parallel to the fluid/porous-medium interface. We performed experiments at several salinities. For 10^-3 and 10^-2 M sodium chloride (NaCl) solutions, both waveforms and amplitudes agree. For 10^-4 M NaCl, however, amplitude deviations occur. We found that a single amplitude field scaling factor describes these discrepancies. We also checked the repeatability of experiments. The amplitudes are constant for the duration of an experiment (1–4 hours) but decrease on longer time scales (~24 hours). However, the waveforms and spatial amplitude pattern of the electric wavefield are preserved over time. 
Our results validate electrokinetic theory for the seismic-to-electromagnetic-wave conversion at interfaces for subsurface exploration purposes.","acoustoelectric effects; geophysical prospecting; geophysical signal processing; seismic waves; seismology","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:5540c1ab-afb9-4cae-b75f-036af4996c85","http://resolver.tudelft.nl/uuid:5540c1ab-afb9-4cae-b75f-036af4996c85","Success and fail factors in sustainable real estate renovation projects","Volker, L.","","2011","Sustainability remains an important issue for the construction industry. Yet, sustainable real estate developments are still considered highly ambitious projects. To find out how and why sustainable renovation projects actually became sustainable, we systematically evaluated 21 leading Dutch real estate renovation projects. In each project we interviewed the client, consultant, architect and contractor. Based on the results it was concluded that it is not necessary to have a pre-defined (sustainability) ambition in order to realize a project that can be considered sustainable in practice. Most of the respondents indicated that the ambition developed throughout the project, mainly because of the potential sustainable reputation or because of the parties involved in the project. Ambitions were not set as high as expected: about half of the respondents already consider preservation of the building and recycling as sustainable solutions. The composition, management and collaboration of the construction team were found to be very important during the process. In this sense sustainable projects do not appear to be any different from regular projects, so then the only question is: Why not sustainable?","ambition; sustainability; real estate renovation; project management; process management","en","conference paper","","","","","","","","","","","","","",""
"uuid:f2793898-5d7d-4771-8706-ec709bd98017","http://resolver.tudelft.nl/uuid:f2793898-5d7d-4771-8706-ec709bd98017","Conceptual framework for potential implementations of multi criteria decision making (MCDM) methods for design quality assessment","Harputlugil, T.; Prins, M.; Gültekin, A.T.; Topçu, Y.I.","","2011","Architectural design can be considered as a process influenced by many stakeholders, each of which has different decision power. Each stakeholder might have his/her own criteria and weightings depending on his/her own perspective and role. Hence design can be seen as a multi-criteria decision making (MCDM) process. Considering architectural design, its evaluation and quality assessment within a context of MCDM is not regularly performed within building processes. The aim of the paper is to find proper MCDM methodologies used in other domains for the assessment of design quality, adapt them to the construction domain and test their applicability. Current tools for quality assessment (for instance DQI, DEEP, AEDET, HQI, LEED, BREEAM, BQA) will be reviewed and compared with several MCDM methods (i.e. AHP, ANP, PROMETHEE, SAW and TOPSIS). Advantages and disadvantages of the outcomes gathered from these comparisons for assessment and applicability within architectural design will be discussed. Finally, reflections on the outcomes will be provided.","architectural design quality; Analytic Hierarchy Process (AHP); design quality assessment tools; Multi Criteria Decision Making (MCDM)","en","conference paper","","","","","","","","","","","","","",""
"uuid:ea80dccd-318e-421d-a9f5-fbad86437a32","http://resolver.tudelft.nl/uuid:ea80dccd-318e-421d-a9f5-fbad86437a32","Releasing the potential of BIM in construction education","Boon, J.; Prigg, C.","","2011","When setting out to teach a group of construction students, the lecturer is faced with a class who have a variety of learning style preferences and whose ability to process information varies, further depending on the type of processing required. They also come to the class with varying previous experience and knowledge and with varying social skills. In order to facilitate learning, the students need to be actively engaged in a task designed to cause them to interact with the information they are supposed to be learning. However, learning is a complex process that requires considerable management. BIM has the potential to assist construction education in this, as it can make information available in a manner that is much more accessible to visual and kinaesthetic learners (the majority of learners). It is not in itself a universal panacea to the problem of teaching construction students. The challenge for construction educators is to use this new form of information provision to enable us to move away from lecture formats and reshape our teaching delivery to a format that is better aligned with the learning styles and processes that suit most learners.","construction education; learning styles; learning processes; building information modelling","en","conference paper","","","","","","","","","","","","","",""
"uuid:6e26eabb-47e3-4877-9f68-61bded01be42","http://resolver.tudelft.nl/uuid:6e26eabb-47e3-4877-9f68-61bded01be42","Fourteen processes defining competitive advantage of Brazilian trade contractors","Oviedo Haito, R.J.J.; Ferreira Cardoso, F.","","2011","Brazilian Trade Contractors (TC), or Subcontractors and Specialty Contractors, are main players in the Brazilian Building Industry competitiveness. They are part of a highly fragmentized and informal chain, with a great diversity in their value propositions and in their organizational forms. Nonetheless, despite their heterogeneity, most of them are SMEs lacking resources, capabilities, and other assets. This lack of assets and a competition led by the lowest bid offering produces the bankruptcy of 50% of them in their fourth year of operation, with negative consequences in the competitiveness of the whole Industry. Hence, understanding the causes of that performance is an important issue to improve TC management practices and, consequently, their performance. This paper focuses on internal factors, specifically, on the internal processes that allow Trade Contractors to achieve a good performance in their specific competitions. For this purpose, data were gathered from a qualitative research in 24 Trade Contractors with good performance and in 7 other agents that hire them, mainly in Sao Paulo - Brazil. Two are the main results: First, fourteen processes conducted by TC were identified. Second, those processes are performed in different configurations in accordance with different value propositions and size of the TC.","trade contractors; subcontractors; competitiveness; process; Brazil","en","conference paper","","","","","","","","","","","","","",""
"uuid:fdafbcd6-6ae0-4414-8efb-51fc5928a851","http://resolver.tudelft.nl/uuid:fdafbcd6-6ae0-4414-8efb-51fc5928a851","Study of the submittal process using lean production principles","Pestana, A.C.V.M.F.; Da Alves, T.C.L.","","2011","In the Architecture, Construction and Engineering (AEC) industry office activities link the information flows from project teams and the production processes on the field. Despite their importance to the overall project, office activities have been overlooked and several authors point out that they are often mismanaged, lack planning, or are buffered to account for the great amount of variability within processes developed at the office level, eventually resulting on site inefficiencies and cost overruns. This paper presents a study of the submittal process through the use of Lean Thinking. Submittals are documents exchanged between the general contractors, subcontractors, the project architect and its team of designers and consultants. Submittals carry information about products and processes used to deliver a project, and are submitted from the parties constructing the project, or supplying materials to it, to the designers so that the submitted information can be checked for conformance to project specifications. The study shows that for the project investigated the submittal process lacked transparency, had low workflow predictability, and showed low levels of reliability. The study concludes that the submittal process can be streamlined by enhancing communication and information sharing amongst stakeholders, through the understanding of the causes of variation in lead times and the understanding of participants needs.","lean construction; lean offic; submittal process","en","conference paper","","","","","","","","","","","","","",""
"uuid:2672e808-b932-43f9-991b-aac0ba07a6f1","http://resolver.tudelft.nl/uuid:2672e808-b932-43f9-991b-aac0ba07a6f1","Standardizing knowledge: A dialective view on architectural knowledge and its managers","Gluch, P.","","2011","Many organizations within the construction industry are currently developing standardized practices. Increased standardization involves new ways of organizing construction projects, changing interrelations between professional groups, setting a new culture, i.e. challenging the institutionalized way of being. It, for instance, leads to a concentration of key knowledge into specific knowledge networks and artifacts. This in turn creates new and/or strengthened roles of expertise within the organizations leading to a reallocation of knowledge, as well as power, from the project setting to centrally organized functions, specialist consultancies and knowledge networks. Based on a case study of one Architect Company, this paper examines the tensions and paradoxes inherent in these new roles. In the study, 13 persons were interviewed; actors responsible for changing practices, developing tools and ensuring learning among employees. The study contributes to theory building within a research field that examines the emergence of new roles and practices in construction and the contradictions which arise leading to tensions and possible conflict. Many of the assumptions that underlie these new practices run counter to the established norms and local practices as well as to construction practitioners intuitions.","social practices; roles; knowledge management; stardardization; construction process; architect company","en","conference paper","","","","","","","","","","","","","",""
"uuid:397d43ab-3b3a-49ee-8eb1-0938f90c7990","http://resolver.tudelft.nl/uuid:397d43ab-3b3a-49ee-8eb1-0938f90c7990","Conceptual framework for potential implementations of multi criteria decision making (MCDM) methods for design quality assessment","Harputlugil, T.; Prins, M.; Tanju Gültekin, A.; Ilker Topçu, Y.","","2011","Architectural design can be considered as a process influenced by many stakeholders, each of which has different decision power. Each stakeholder might have his/her own criteria and weightings depending on his/her own perspective and role. Hence design can be seen as a multi-criteria decision making (MCDM) process. Considering architectural design, its evaluation and quality assessment within a context of MCDM is not regularly performed within building processes. The aim of the paper is to find/adapt proper methodologies of MCDM, used in other domains for assessment of design quality, adapt them to the construction domain and test their applicability. Current tools (for instance DQI, DEEP, AEDET, HQI, LEED, BREEAM, BQA) for quality assessment will be reviewed and compared with several MCDM methods (ie. AHP, ANP, PROMETHEE, SAW AND TOPSIS). Advantages and disadvantages of gathered outcomes from comparisons for assessment and applicability within architectural design will be discussed. Finally reflections on the outcomes will be provided.","architectural design quality, analytic hierarchy process (AHP), design quality assessment tools, multi criteria decision making (MCDM)","en","conference paper","Delft University of Technology","","","","","","","","Architecture","Real Estate and Housing","","","",""
"uuid:ac0d448a-d861-45f7-b806-17c9ca9c31f8","http://resolver.tudelft.nl/uuid:ac0d448a-d861-45f7-b806-17c9ca9c31f8","Collaborative design in a context of sustainability: The epistemological an practical implications of the precautionary principle for design","Cucuzzella, C.","","2011","Sustainable design is an approach that seeks to adopt an ethic of the future, where the vision of the solutions is based on a temporal and spatial perspective that is predominantly long-term and global. Design is characterized by its projective and ambivalent nature, and therefore a conscious effort to anticipate the outcomes of design intentions is crucial. Consequently, all design is inherently laden with uncertainty, doubt, and specifically in some technology-driven design projects - contradictions and controversies. Typically, such uncertainties and contradictions are not considered during the initial phase, since the main goal at this phase is to simplify the problem, and therefore these anomalies are often omitted, as they are seen to be outside the boundaries of the design problem. How can designers consider the uncertainties and contradictions during conceptualization, as well as consider the benefits resulting from their design proposals? Designers in their sustainable design practice must consider (1) the multiple objectives and criteria; (2) the multiple users and user preferences; (3) the multiple design alternatives; (4) the complex changing global situation; and (5) the knowledge from the various disciplines comprising the design project. A collective systems thinking approach to design addresses these concerns. Consequently, the theoretical basis of the precautionary principle is directly in line with this approach to design. 
This presentation will discuss the epistemological and practical implications of the precautionary principle for design in this context.","collaborative design; sustainability; precautionary principle; integrated design process; fourth generation evaluation","en","conference paper","","","","","","","","","","","","","",""
"uuid:09179df1-7418-4b33-8a75-e0c557c6079c","http://resolver.tudelft.nl/uuid:09179df1-7418-4b33-8a75-e0c557c6079c","Efficient Execution of Video Applications on Heterogeneous Multi- and Many-Core Processors","Pereira de Azevedo Filho, A.","Juurlink, B.H.H. (promotor)","2011","In this dissertation we present methodologies and evaluations aiming at increasing the efficiency of video coding applications for heterogeneous many-core processors composed of SIMD-only, scratchpad memory based cores. Our contributions are spread in three different fronts: thread-level parallelism strategies for many-cores, identification of bottlenecks for SIMD-only cores, and software cache for scratchpad memory based cores. First, we present the 3D-Wave parallelization strategy for video decoding that scales for many-core processors. It is based on the observation that dependencies between frames are related with the motion compensation kernel and motion vectors are usually within a small range. The 3D-Wave strategy combines macroblock-level parallelism with frame- and slice-level parallelism by overlapping the decoding of frames while dynamically managing macroblock dependencies. The 3D-Wave was implemented and evaluated in a simulated many-core embedded processor consisting of 64 cores. Policies for reducing memory footprint and latency are presented. The effects of memory latency, cache size, and synchronization latency are studied. The assessment of SIMD-only cores for the increasing complexity of current multimedia kernels is our second contribution. We evaluate the suitability of SIMD-only cores for the increasing divergent branching in video processing algorithms. The H.264 Deblocking Filter is used as test case. Also, the overhead imposed by the lack of a scalar processing unit for SIMD-only cores is measured using two methodologies. Low area overhead solutions are proposed to add scalar support to SIMD-only cores. 
Finally, we focus on the memory hierarchy and we propose a new software cache organization to increase the efficiency and efficacy of scratchpad memories for unpredictable and indirect memory accesses. The proposed Multidimensional Software Cache reduces software cache overhead by allowing the programmer to exploit known access behavior in order to reduce the number of accesses to the software cache and by grouping memory requests. An instruction to accelerate MDSC lookup is also presented and analyzed.","Video Processing; Parallel Processing; Processor Architecture; Scratchpad Memory; Software Cache; SIMD Processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:55fabcd8-0435-48ba-aba0-a0bad1e05033","http://resolver.tudelft.nl/uuid:55fabcd8-0435-48ba-aba0-a0bad1e05033","Controlled-source interferometric redatuming by crosscorrelation and multidimensional deconvolution in elastic media","Van der Neut, J.R.; Thorbecke, J.W.; Mehta, K.; Slob, E.C.; Wapenaar, C.P.A.","","2011","Various researchers have shown that accurate redatuming of controlled seismic sources to downhole receiver locations can be achieved without requiring a velocity model. By placing receivers in a horizontal or deviated well and turning them into virtual sources, accurate images can be obtained even below a complex near-subsurface. Examples include controlled-source interferometry and the virtual-source method, both based on crosscorrelated signals at two downhole receiver locations, stacked over source locations at the surface. Because the required redatuming operators are taken directly from the data, even multiple scattered waveforms can be focused at the virtual-source location, and accurate redatuming can be achieved. To reach such precision in a solid earth, representations for elastic wave propagation that require multicomponent sources and receivers must be implemented. Wavefield decomposition prior to crosscorrelation allows us to enforce virtual sources to radiate only downward or only upward. Virtual-source focusing and undesired multiples from the overburden can be diagnosed with the interferometric point-spread function (PSF), which can be obtained directly from the data if an array of subsurface receivers is deployed. 
The quality of retrieved responses can be improved by filtering with the inverse of the PSF, a methodology referred to as multidimensional deconvolution.","acoustic wave interferometry; correlation methods; deconvolution; filtering theory; geophysical signal processing; geophysical techniques; seismic waves; seismology","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:32b4ed4a-f4e8-4852-99fb-fd405aa7da6a","http://resolver.tudelft.nl/uuid:32b4ed4a-f4e8-4852-99fb-fd405aa7da6a","On the mechanical efficiency of dielectric barrier discharge plasma actuators","Giepman, R.H.M.; Kotsonis, M.","","2011","The mechanical power production and electrical power consumption of the dielectric barrier discharge plasma actuator is investigated for different operating conditions. The ratio of these two values delivers the mechanical efficiency of the actuator as a flow acceleration device. The general trend is that higher carrier frequencies and voltages lead to higher values of the efficiency. The values that were found for the mechanical efficiency are very small, the highest recorded value is only 0.18%.","discharges (electric); plasma applications; plasma transport processes","en","journal article","American Institute of Physics","","","","","","","","Aerospace Engineering","Aerodynamics & Wind Energy","","","",""
"uuid:6bccace9-7d74-4235-b9b0-4ff0a49aa971","http://resolver.tudelft.nl/uuid:6bccace9-7d74-4235-b9b0-4ff0a49aa971","The influence of the workplace on perceived productivity","Maarleveld, M.; De Been, I.","","2011","Increasing productivity, stimulating knowledge sharing and satisfying employees. Three objectives which are heard quite often during the design phase of an office. Both latter objectives are often perceived as ways to increasing productivity as well. The Center for People and Buildings (CfPB) in Delft, The Netherlands, has conducted a number of case studies into employee satisfaction with the working environment and perceived productivity – i.e the extent to which employees appraise the physical environment of the office as supporting their productivity.. This paper focuses on physical characteristics of the office that might influence the perceived productivity. According to our data (over 10.000 respondents from 71 case studies), the ability to concentrate has a substantial influence on the perceived productivity in general, as well as of the individual, the team and the organisation. Respondents that are more satisfied with the ability to concentrate are also more likely to experience the workplace as supportive for their productivity. The possibility to communicate only has impact on the perceived team and organisational productivity. According to the results, employees rate the general productivity primarily on the basis of their individual productivity, rather than team productivity or organisational productivity. In connection to work processes it appeared that for particular work processes employees judge the functionality and comfort of the workplace as most important in affecting their perceived productivity. These research findings may help facility managers in dealing with workplace design and workplace management. 
It gives the facility manager a solid input to decision making about the best possible office concept by taking into account the preferences of employees.","productivity; employee satisfaction; work environment; work processes; Center for People and Buildings","en","conference paper","EuroFM","","","","","","","","Architecture and The Built Environment","Real Estate and Housing","","","",""
"uuid:2ae9e4c3-efa3-452a-8d24-a33c12368bd2","http://resolver.tudelft.nl/uuid:2ae9e4c3-efa3-452a-8d24-a33c12368bd2","A hybrid replenishment model, the best fit in fast growing industries","Hartevelt, R.","","2011","During the design supply chain control processes, balancing cost versus service plays an important role. To select the most suitable replenishment strategy is the main enabler in reaching the goal of finding the optimal balance. In this article a framework is developed which will help to select the right replenishment strategy and to design a supply chain control process that supports companies to secure the results of improvements. During the evaluation of replenishment strategies in the design phase of the project it becomes clear that in specific situations one single replenishment strategy will not cover the overall control need in supply chains, especially in fast growing or emerging markets. In those specific situations the best replenishment strategy is a combination of re-order-point and kanban. This paper is based on experience gained during a supply chain control study at a Philips business.","Supply chain design; Supply chain control process; Replenishment strategy; Planning methodology; Supply chain performance management","en","journal article","Hartevelt R.","","","","","","","2011-06-23","Technology, Policy and Management","Infrastructure Systems & Services","","","",""
"uuid:1d6d4d70-a458-4e3b-954c-969cef7dc2e8","http://resolver.tudelft.nl/uuid:1d6d4d70-a458-4e3b-954c-969cef7dc2e8","Separation of blended data by iterative estimation and subtraction of blending interference noise","Mahdad, A.; Doulgeris, P.; Blacquiere, G.","","2011","Seismic acquisition is a trade-off between economy and quality. In conventional acquisition the time intervals between successive records are large enough to avoid interference in time. To obtain an efficient survey, the spatial source sampling is therefore often (too) large. However, in blending, or simultaneous acquisition, temporal overlap between shot records is allowed. This additional degree of freedom in survey design significantly improves the quality or the economics or both. Deblending is the procedure of recovering the data as if they were acquired in the conventional, unblended way. A simple least-squares procedure, however, does not remove the interference due to other sources, or blending noise. Fortunately, the character of this noise is different in different domains, e.g., it is coherent in the common source domain, but incoherent in the common receiver domain. This property is used to obtain a considerable improvement. We propose to estimate the blending noise and subtract it from the blended data. The estimate does not need to be perfect because our procedure is iterative. Starting with the least-squares deblended data, the estimate of the blending noise is obtained via the following steps: sort the data to a domain where the blending noise is incoherent; apply a noise suppression filter; apply a threshold to remove the remaining noise, ending up with (part of) the signal; compute an estimate of the blending noise from this signal. At each iteration, the threshold can be lowered and more of the signal is recovered. Promising results were obtained with a simple implementation of this method for both impulsive and vibratory sources. 
Undoubtedly, in the future, algorithms will be developed for the direct processing of blended data. However, currently a high-quality deblending procedure is an important step allowing the application of contemporary processing flows.","data acquisition; geophysical signal processing; iterative methods; least squares approximations; seismology; signal denoising","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:700dba13-23cb-44b3-93f8-9c71ff00a8f3","http://resolver.tudelft.nl/uuid:700dba13-23cb-44b3-93f8-9c71ff00a8f3","Underwater detection, classification and localisation: Improving the capabilities of towed sonar arrays","Colin, M.E.G.D.","Simons, D.G. (promotor); Blacquiere, G. (promotor)","2011","The end of the Cold War and the collapse of the Warsaw pact have resulted in a change of operational theatre for the naval forces of the North Atlantic Treaty Organisation (NATO). In particular, the focus of Anti Submarine Warfare forces has shifted from tracking Soviet nuclear ballistic missile submarine in the deep waters of the Atlantic ocean to hunting smaller and quieter Diesel electric submarines in coastal water. In most scenarios, towed array sonars are the best sensor to detect, classify and localise submarines. The long passive towed array sonars used during the Cold war are more difficult to use in coastal waters and are being replaced by most Navies by Low Frequency Active Sonars (LFAS) using a towed source and shorter towed receiving array. These shorter towed arrays can be used in both active and passive modes. In passive mode, their reduced size offer limited performance compared to their longer equivalent. In active mode, they can detect submarines at long ranges in shallow waters but are plagued by false alarms caused by echoes from features of the seafloor. This thesis deals with algorithms improving Detection, Classification and Localisation for towed sonar arrays, with a specific focus on LFAS sonars. In Chapter 2, we derive, analyse and apply to measured data a method for improving detection performance with short passive towed arrays. An important issue in detection of quiet acoustic source with short towed arrays is the improvement in signal-to-noise ratio (SNR) and bearing resolution for targets emitting low frequency signals. One of the techniques believed to improve these characteristics is Synthetic Aperture Sonar (SAS). 
The method is based on the artificial enlargement of a sonar array by coherently integrating acoustic snapshots at different antenna positions. We first derive theoretical measures of performance of passive SAS and report on its application in combination with other signal-processing algorithms. Its theoretical performance is compared with that of the frequently used incoherent integration. The passive SAS algorithm used is the method known as Extended Towed Array Measurement (ETAM), or the overlap correlator. It is based on the correlation of data snapshots on overlapping hydrophones. Correlation is a key issue in this method, and since it is affected by noisy targets, some gain can be expected from noise cancellation. The influence of a method of tow-ship noise cancelling at hydrophone level (Inverse Beam Forming, IBF) on the performance of ETAM is analysed. This approach increases ETAM performance by removing a loud and highly correlated noise source, the tow ship, and thus enhancing the other targets in the beam pattern. The results of the algorithms applied to two experimental datasets show that they bring an improvement close to theoretical expectations. Port-starboard discrimination and the successful combination of IBF with ETAM make this approach innovative. In Chapter 3, methods for improving the localisation of a source with a short towed array are analysed and applied to data, both simulated and measured at sea. Localisation performance with a towed sonar array is related to the array length. Knowledge of the position of a given acoustic source gives a critical tactical advantage to a ship. There are a limited number of ways to estimate the range of a source with a towed passive sonar, most requiring the towing platform to execute a manoeuvre. These manoeuvres are undesirable as they take a lot of time, cause bending of the towed array and can even put the towing platform in harm’s way. 
We present a number of source position estimation methods for both broadband and narrowband sources suitable for short towed arrays. Recursive methods based on the extended Kalman filter are first examined. A new method based on the integration of time delay of arrival measurements along the sonar path is described. We derive theoretical performance indicators and show that this method makes it possible to estimate the position and speed of a source without a manoeuvre. In Chapter 4, the classification performance of a broadband waveform is analysed and measured on data collected at sea. Like any long-range active sonar system, LFAS produces a large amount of unwanted sea bottom echoes, or clutter. These echoes give rise to false alarms that increase the computational load of target trackers and jeopardise the correct classification of each echo. The number of false alarms due to clutter can be reduced either through echo classification techniques or through Doppler filtering, provided the targets of interest are in motion. Much research has been carried out on waveform investigation for the efficient use of the bandwidth capabilities of modern sonar transmitters. Among the many waveforms, Binary Phase Shift Keyed (BPSK) pulses have emerged as exhibiting cross-correlation properties relevant to Doppler filtering while maintaining a range resolution comparable to Frequency Modulated (FM) pulses. We have successfully applied a false alarm reduction technique using contacts obtained with an FM pulse subsequently processed by Doppler filtering with a BPSK pulse. The Doppler classification performance for this pulse is evaluated on an experimental dataset and a few limitations of BPSK are identified.","sonar; detection; classification; localisation; signal processing; time delay; waveform","en","doctoral thesis","TNO","","","","","","","","Aerospace Engineering","Aircraft Transport and Operation","","","",""
"uuid:239568db-562f-4c44-8fa9-c60c6310e3c2","http://resolver.tudelft.nl/uuid:239568db-562f-4c44-8fa9-c60c6310e3c2","Testing facility for hydrogen storage materials designed to simulate application based conditions","Westerwaal, R.J.; Nyqvist, R.G.; Haije, W.G.","","2011","For the daily use of hydrogen storage materials, not only their intrinsic storage properties are important, but also equally important is the performance under practical conditions. Besides the techniques already available for the fundamental characterization of storage materials, there is a growing need to test storage materials under conditions resembling day-to-day use. For that we developed and tested a downscaled hydrogen storage reactor with which it is possible to monitor the hydrogenation behavior under nonideal conditions. Here we present a characterization of the developed reactor setup which enables a fast screening of storage materials. For characterization and calibration purposes, we use the rather well-documented LaNi5–Al alloy as reference. The found experimental results agree well with the properties of LaNi5–Al as reported in literature. Our results show that this reactor setup enables an efficient screening of new developed storage alloys under realistic conditions and is therefore complementary to the already existing characterization setups.","aluminium alloys; calibration; chemical reactors; hydrogen storage; hydrogenation; lanthanum alloys; nickel alloys; process monitoring; test facilities","en","journal article","American Institute of Physics","","","","","","","","Applied Sciences","ChemE/Chemical Engineering","","","",""
"uuid:335a8f1a-5af7-4143-af1e-208f44526ba6","http://resolver.tudelft.nl/uuid:335a8f1a-5af7-4143-af1e-208f44526ba6","Busemann functions and equilibrium measures in last passage percolation models","Cator, E.; Pimentel, L.P.R.","","2011","The interplay between two-dimensional percolation growth models and one-dimensional particle processes has been a fruitful source of interesting mathematical phenomena. In this paper we develop a connection between the construction of Busemann functions in the Hammersley last-passage percolation model with i.i.d. random weights, and the existence, ergodicity and uniqueness of equilibrium (or timeinvariant) measures for the related (multi-class) interacting fluid system. As we shall see, in the classical Hammersley model, where each point has weight one, this approach brings a new and rather geometrical solution of the longest increasing subsequence problem, as well as a central limit theorem for the Busemann function.","Hammersley process; Last passage percolation; Busemann functions; Equilibrium","en","journal article","Springer Verlag","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:2177953b-48b8-43ba-a05e-f375ed3a44d5","http://resolver.tudelft.nl/uuid:2177953b-48b8-43ba-a05e-f375ed3a44d5","Barriers and impediments to transformational government: Insights from literature and practice","Van Veenstra, A.F.; Klievink, B.; Janssen, M.F.W.H.A.","","2011","Transformational government (t-government) has been introduced as a new stage of e-government aimed at realising structural changes and greater benefits in the public sector. Yet, there are many impediments blocking transformation, and there is limited insight in these barriers. In this paper, impediments for t-government are investigated by conducting a literature review and carrying out three case studies. The impediments found in literature were confirmed and extended using the case studies. Impediments simultaneously occur on the governance, organisational and managerial, and technical level and need to be addressed in concert. Research on transformation can benefit from understanding these interrelated impediments.","e-government; t-government; transformation; impediments; barriers; process reengineering","en","journal article","Inderscience Publishers","","","","","","","","Technology, Policy and Management","Infrastructure Systems & Services","","","",""
"uuid:8b9642e5-bc6f-4ad0-a413-fa5f505cc2ad","http://resolver.tudelft.nl/uuid:8b9642e5-bc6f-4ad0-a413-fa5f505cc2ad","Reconfigurable digital receiver design and application for instantaneous polarimetric measurement","Wang, Z.; Krasnov, O.A.; Babur, G.P.; Ligthart, L.P.; Van der Zwan, F.","","2011","This paper presents the development of a reconfigurable receiver to undertake challenging signal processing tasks for a novel polarimetric radar system. The field-programmable gate arrays (FPGAs)-based digital receiver samples incoming signals at intermediate frequency (IF) and processes signals digitally instead of using conventional analog approaches. It offers more robust system stability and avoids unnecessary multichannel calibrations of analog circuits for a full polarimetric radar. Two kinds of dual-orthogonal signals together with corresponding processing algorithms have been investigated; the digital implementation architectures for all algorithms are then presented. Processing algorithms implemented in FPGA chips can be reconfigured adaptively regarding to different transmitted waveforms without modification of hardware. The successful development of such reconfigurable receiver extends our radar capacity and thus yields tremendous experimental flexibility for atmospheric remote sensing and polarimetric studies of ground-based targets.","radar signal processing and system modeling; radar architecture and systems","en","journal article","Cambridge University Press","","","","","","","2012-04-06","Electrical Engineering, Mathematics and Computer Science","International Research Centre for Telecommunications and Radar, IRCTR","","","",""
"uuid:305ba301-6f32-4a8b-82b4-29214f6a31d7","http://resolver.tudelft.nl/uuid:305ba301-6f32-4a8b-82b4-29214f6a31d7","The algebraic difference of two random Cantor sets: The Larsson family","Dekking, M.; Simon, K.; Székely, B.","","2011","In this paper, we consider a family of random Cantor sets on the line and consider the question of whether the condition that the sum of the Hausdorff dimensions is larger than one implies the existence of interior points in the difference set of two independent copies. We give a new and complete proof that this is the case for the random Cantor sets introduced by Per Larsson.","random fractals; random iterated function systems; differences of Cantor sets; Palis conjecture; multitype branching processes","en","journal article","Institute of Mathematical Statistics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:dc1eb372-1686-416c-89df-29bd007da4df","http://resolver.tudelft.nl/uuid:dc1eb372-1686-416c-89df-29bd007da4df","Source depopulation potential and surface-wave tomography using a crosscorrelation method in a scattering medium","Gouedard, P.; Roux, P.; Campillo, M.; Verdel, A.R.; Yao, H.; Van der Hilst, R.D.","","2011","We use seismic prospecting data on a 40 × 40 regular grid of sources and receivers deployed on a 1 km × 1 km area to assess the feasibility and advantages of velocity analysis of the shallow subsurface by means of surface-wave tomography with Green's functions estimated from crosscorrelation. In a first application we measure Rayleigh-wave dispersion curves in a 1D equivalent medium. The assumption that the medium is laterally homogeneous allows using a simple projection scheme and averaging of crosscorrelation functions over the whole network. Because averaging suppresses noise, this method yields better signal-to-noise ratio than traditional active-source approaches, and the improvement can be estimated a priori from acquisition parameters. We find that high-quality dispersion curves can be obtained even when we reduce the number of active sources used as input for the correlations. Such source depopulation can achieve significant reduction in the cost of active source acquisition. In a second application we compare Rayleigh-wave group velocity tomography from raw and reconstructed data. We can demonstrate that the crosscorrelation approach yields group velocity maps that are similar to active source maps. Scattering has an importance here as it may enhance the crosscorrelation performance. We quantify the scattering properties of the medium using mean free path measurements from coherent and incoherent parts of the signal. 
We conclude that for first-order velocity analysis of the shallow subsurface, the use of crosscorrelation offers a cost-effective alternative to methods that rely exclusively on active sources.","correlation methods; geophysical prospecting; geophysical signal processing; Green's function methods; Rayleigh waves; seismology; tomography","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:66621333-dc12-458f-8536-1e733fe71eb8","http://resolver.tudelft.nl/uuid:66621333-dc12-458f-8536-1e733fe71eb8","Design and fabrication of single grain TFTs and lateral photodiodes for low dose X-ray detection","Arslan, A.; Ishihara, R.; Derakhshandeh, J.; Beenakker, C.I.M.","","2011","Design, fabrication and measurement results of single grain (SG) lateral PIN photodiodes and SG thin film transistors (TFT) are reported in this paper. Devices were developed to be used in indirect X-ray image sensor pixel design. We have controlled position of 6 ?m x 6 ?m silicon grains with excimer-laser crystallization of a-Si film. Lateral PIN photodiode (PD) arrays were designed inside the single grain with 1 ?m, 1.5 ?m and 2 ?m intrinsic region length and 4 ?m width. The gate length and the width of the fabricated TFTs are 1.5 ?m and 4 ?m, respectively. Devices were fabricated using a-Si, SOI and crystalline silicon layers and electrical measurement results were compared. 100 ?m x 100 ?m sizes SG-photodiodes have dark and saturation currents on the order of 0.1 nA and 10 nA resulting in a light sensitivity of 200 with an exposure of white light. Fabricated NMOS and PMOS TFTs inside the grains have field effect mobility of 526 cm2/Vs and 253 cm2/Vs, respectively.","X-ray, thin-film-transistor (TFT), single grain, large area detection, image sensor, ?-Czochralski process","en","conference paper","SPIE","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","","",""
"uuid:3860d68b-e335-41ea-892d-e05a06f7ce3a","http://resolver.tudelft.nl/uuid:3860d68b-e335-41ea-892d-e05a06f7ce3a","Conformal antenna array for ultra-wideband direction-of-arrival estimation","Liberal, I.; Caratelli, D.; Yarovoy, A.","","2011","The design and full-wave analysis of an antenna system for ultra-wideband radio direction finding applications is presented. The elliptical dipole antenna is selected as antenna element due to its robust circuital and radiation properties. The influence of the conformal deformation on the antenna performance has been studied in details. A suitable radome is designed to enhance the antenna front-to-back radiation ratio, as well as to increase the environmental durability of the structure. The considered antennas are optimized for their adoption in two different sub-arrays covering the [250, 950] MHz and [0.9, 3.3] GHz frequency bands, respectively. A uniform circular array (UCA) with five elements is used for the array topology. The full-wave analysis of the whole array structure is carried out in order to evaluate the coupling between the antenna elements. In particular, a novel calibration technique is developed in order to compensate for the mutual coupling between the array elements, possible variations in the antenna characteristics, and the effects of the array bearing structure. The performance of the designed array in terms of direction-of-arrival estimation is thoroughly analyzed and discussed.","antenna design; modeling and measurements; radar applications; radar signal processing and system modeling","en","journal article","Cambridge University Press","","","","","","","2012-03-08","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","",""
"uuid:63a3906b-de41-4650-a9be-de667defe827","http://resolver.tudelft.nl/uuid:63a3906b-de41-4650-a9be-de667defe827","Automatic diagnosis and control of distributed solid state lighting systems","Dong, J.; Van Driel, W.; Zhnag, G.","","2011","This paper describes a new design concept of automatically diagnosing and compensating LED degradations in distributed solid state lighting (SSL) systems. A failed LED may significantly reduce the overall illumination level, and destroy the uniform illumination distribution achieved by a nominal system. To our knowledge, an automatic scheme to compensate LED degradations has not yet been seen in the literature, which requires a diagnostic step followed by control reconfigurations. The main challenge in diagnosing LED degradations lies in the usually unsatisfactory observability in a distributed SSL system, because the LED light output is usually not individually measured. In this work, we tackle this difficulty by using pulse width modulated (PWM) drive currents with a unique fundamental frequency assigned to each LED. Signal processing methods are applied in estimating the individual illumination flux of each LED. Statistical tests are developed to diagnose the degradation of LEDs. Duty cycle of the drive current signal to each LED is re-optimized once a fault is detected, in order to compensate the destruction of the uniform illumination pattern by the failed LED.","systems design; light-emitting diodes; illumination design; process monitoring and control; OA-Fund TU Delft","en","journal article","Optical Society of America","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","",""
"uuid:6568947a-fae9-4266-87a2-0204ce940da9","http://resolver.tudelft.nl/uuid:6568947a-fae9-4266-87a2-0204ce940da9","Morphodynamics of seasonally closed coastal inlets at the central coast of Vietnam","Tran, T.T.","Stive, M.J.F. (promotor)","2011","Situated in a monsoon-prone humid tropical region, Vietnam is affected by both oceanic and continental climates causing disasters to the country like riverine flooding and storm induced damage. The coastal districts of Vietnam have a population of about 18 million habitants, account for nearly one fourth of the total population of the country and locate about 50% of the major towns and cities of Vietnam. Most of the people currently living in the coastal zone have their livelihood mainly relying on marine resources and they are also the most vulnerable to sea-related natural disasters, such as storms and floods. The natural disasters occurring in the coastal strip in the central part of Vietnam, caused by meteorological and oceanographical factors, are intensified by human interventions, like the damming of rivers for various purposes or the extensive deforestation for the creation of agricultural lands. With more than 1,000 km of coastline, the central coast of Vietnam has more than sixty inlets and river mouths discharging into the South China Sea. These systems play a vital role in social-economic activities in the region. The steep rivers with abundant natural but temporally unevenly distributed flows make the low-lying coastal plains in the region prone to inundation by flooding, while the river is almost dry during the rest of the year. Specific topographical features and hydrological characteristics of the region produce a particularly high seasonal geomorphological variation of tidal inlets and river mouths, from narrowing, shoaling or entirely closing in the dry season to widening or breaching in the flood period. 
Frequent disasters set back development efforts in this poorest region of Vietnam and trap people in a cycle of poverty. Stabilising inlets at the central coast of Vietnam therefore is recognised as one of the priority tasks to mitigate potential risks caused by natural disasters, especially by floods and storms on low-lying coastal plains, and to promote a safe and stable condition for social-economic development in the region. To carry out this task, Vietnam needs both substantial financial and human resources, particularly knowledge and experience in coastal engineering, which is not trivial for a developing country. Additionally, the strong seasonal variation of inlets and estuaries contributes to the complexity of the problems and raises the necessity of implementing a strategy for inlet and river mouth stabilisation under the constraints of a shortage of resources and knowledge. This thesis focuses on tidal inlets and estuaries in a wave-dominated, micro-tidal environment under the influence of episodic river flooding in the central coast of Vietnam. Natural behaviour and morphological stability of tidal inlets, which significantly interact with channel migration, entrance shoaling or closure have been identified and analysed based on field observations, historical satellite images, topographical maps and bathymetrical data. Based on the regional natural settings and hydrodynamic-morphological features, tidal inlets along the central coast of Vietnam can be divided into two main categories, namely, (1) barrier lagoon inlets and (2) wave dominated estuary inlets. A conceptual model for channel evolution and seasonal opening/closure of tidal inlets is proposed which describes the cyclic evolution of a typical tidal inlet at the central coast of Vietnam. In the conceptual model, the inlet entrance is forced both by the alongshore current which deposits sediment in the inlet channel and by the ebb tidal and river generated currents which erode sediment from the inlet channel.
The interpretation of the Escoffier diagram is extended conceptually to explain the seasonal variation of both open equilibrium and closure. The variation is regulated by the seasonal variation of river flow and littoral drift. The conceptual model indicates the two major processes which dominate in the dry and the flood season leading to a deviation from the stable and unstable equilibrium points in the Escoffier diagram. This supports our understanding of seasonal variation of coastal inlets and estuaries in a region that experiences monsoons and storms causing a large fluctuation in littoral drift and ebb flow at the central coast of Vietnam. To get deeper insight into the underlying processes and cross-sectional stability of a schematised tidal inlet, regulated by tides only and regulated by both tides and waves, the process-based morphodynamic modelling system Delft-3D has been applied. In the model the tidal period, amplitude, basin area and initial inlet dimensions were changed systematically to create different hydrodynamic environments for inlet evolution. The model successfully reproduces the evolution of the channel flow area towards equilibrium for a tidal inlet and is able to describe the main behaviour of an inlet in response to a range of tide and wave conditions and geometries. The model results are in good agreement with empirical relationships (O'Brien, 1969; Jarrett, 1976) and the analytical solution (DiLorenzo, 1988) of Escoffier's diagram. To investigate location stability of inlet channels, seven experiments were designed to cover 3 different stability ranges (poor, fair to good stability). Reliable model results increase the understanding of the processes underlying the migration and closure of a tidal inlet.
It is found that tidal inlet behaviour and location stability are linked to the number of channels on the ebb delta, the curvature if there is only one channel, the type of bar on the ebb delta, the migration of the updrift barrier island, and the distance between the inlet throat and the outer margin of the entrance bar. The model results demonstrate that the process-based model is able to reproduce the morphological evolution of a tidal inlet in a manner fairly consistent with the Bruun et al. (1978) empirical criteria for location stability. A typical example of a tidal inlet migrating due to oblique waves which includes features such as ebb channel migration, shifting and diminishing, and the bypassing of ebb shoals from the updrift to the downdrift barriers is investigated and discussed in detail. In another case inlet closure due to prolongation of the ebb channel and infilling with littoral-drift material in the foreshore is also observed. Furthermore, the model results indicate that Escoffier's closure curve is solely applicable to the stability of the channel gorge and thus insufficient to explain the closure of a tidal inlet due to littoral sand infilling into the main ebb channel. In this study solutions are developed for the stabilisation of tidal inlets at the central coast of Vietnam. The solutions are based on the natural behaviour and evolution of two different types of tidal inlets in the region, namely 1) barrier lagoon inlets and 2) inlets formed at the mouth of wave dominated estuaries. For each type of inlet, both short-term and long-term solutions as well as structural and non-structural solutions are taken into account. The solution for the stabilisation of inlets at the central coast of Vietnam is to restrict and/or respond to problems.
To verify proposed solutions for the stabilisation of inlets along the central coast of Vietnam, process-based modelling is employed to simulate the evolution of a schematised tidal inlet that is stabilised by two jetties and by using river flow to flush the inlet channel. The simulation results show that the inlet after stabilisation by jetties remains open but the inlet channel is highly variable due to the accumulation and erosion of sediment in between two jetties. A sedimentation and erosion pattern is found which is related to the distance in between the two jetties and the strength of the tidal power. An optimum distance between the two jetties that takes into account the effectiveness of the jetties and the structural safety during a major flood event needs further study. For inlets that are stabilised by using river flow to flush the inlet, a set of simulation scenarios with different flushing discharges and flushing durations was designed. For a limited number of simulation scenarios, the model results show that with the same amount of flushing volume, the scenario that has a longer flushing duration and a sufficient flushing discharge is more efficient than the scenario that uses a high flushing discharge over a short duration. This means that the flushing efficiency is closely related to the flushing duration rather than the flushing discharge. Moreover, the flushing moment (at the beginning of the ebb phase or at the beginning of the flood phase) will also contribute to the efficiency of the solution but needs more study.","tidal inlet; central coast Vietnam; process-based modeling; inlet stability; dynamic equilibrium; inlet stabilization","en","doctoral thesis","Ipskamp Drukkers","","","","","","","2011-01-18","Civil Engineering and Geosciences","Department of Hydraulic Engineering","","","",""
"uuid:8fc60786-7473-495c-ad94-577ec6081aae","http://resolver.tudelft.nl/uuid:8fc60786-7473-495c-ad94-577ec6081aae","Present & Future: Visualising ideas of water infrastructure design","Poolman, M.I.","Van de Giesen, N.C. (promotor)","2011","In redevelopment and redesign of small water structures local water governing institutions are increasingly required to and requesting that the planning processes are set up in a participatory manner. Participatory decision making processes are set up to bring stakeholders with different backgrounds, ideas, experiences and expertise together. Ideally they work collectively towards finding a solution to a problem situation. Because of their differences, stakeholders often have different ideas about the problem situation and about the ways to solve it. Discussions take place and ideas are expressed in words or text as each stakeholder tries to explain his view of the situation and possible solution. The mind, however, is more slowly stirred by the ear than by the eye. From literature about previous research activities in sociology, anthropology and systems thinking it was learnt that pictorial visuals can be used to stimulate participants to take part and strongly contribute to the analysis of the situation. Visuals could provide a better understanding of a subject than words alone could. During this research a methodology called yourScape was developed. It is made up of a number of steps that enable and stimulate stakeholders to make and use two-dimensional, still (non-moving) visuals of their ideas of small water infrastructures at present and in the future. In asking stakeholder to make their own visuals by drawing or making collages yourScape is rather unique compared to other participatory methodologies that use visuals. The research shows that own-made visuals can help stakeholders identify which differences and similarities there are in their ideas of the problem situation and of possible solutions. 
Through group discussions stakeholders collectively identify and analyse what these differences mean for continued work on redevelopment and redesign of small water infrastructures.","participation; water management; infrastructure design; visuals; stakeholders; decision-making process","en","doctoral thesis","VSSD","","","","","","","","Civil Engineering and Geosciences","Water Resources Management","","","",""
"uuid:703c9ef4-623e-4805-949e-485ec4d232d1","http://resolver.tudelft.nl/uuid:703c9ef4-623e-4805-949e-485ec4d232d1","An Empirical Analysis of Stakeholders’ Influence on Policy Development: The Role of Uncertainty Handling","Bijlsma, R.M.; Bots, P.W.G.; Wolters, H.A.; Hoekstra, A.Y.","","2011","Stakeholder participation is advocated widely, but there is little structured, empirical research into its influence on policy development. We aim to further the insight into the characteristics of participatory policy development by comparing it to expert-based policy development for the same case. We describe the process of problem framing and analysis, as well as the knowledge base used. We apply an uncertainty perspective to reveal differences between the approaches and speculate about possible explanations. We view policy development as a continuous handling of substantive uncertainty and process uncertainty, and investigate how the methods of handling uncertainty of actors influence the policy development. Our findings suggest that the wider frame that was adopted in the participatory approach was the result of a more active handling of process uncertainty. The stakeholders handled institutional uncertainty by broadening the problem frame, and they handled strategic uncertainty by negotiating commitment and by including all important stakeholder criteria in the frame. In the expert-based approach, we observed a more passive handling of uncertainty, apparently to avoid complexity. The experts handled institutional uncertainty by reducing the scope and by anticipating windows of opportunity in other policy arenas. Strategic uncertainty was handled by assuming stakeholders’ acceptance of noncontroversial measures that balanced benefits and sacrifices. Three other observations are of interest to the scientific debate on participatory policy processes. Firstly, the participatory policy was less adaptive than the expert-based policy. 
The observed low tolerance for process uncertainty of participants made them opt for a rigorous “once and for all” settling of the conflict. Secondly, in the participatory approach, actors preferred procedures of traceable knowledge acquisition over controversial topics to handle substantive uncertainty. This excluded the use of expert judgment only, whereas the experts relied on their judgment in the absence of a satisfactory model. Thirdly, our study provides empirical evidence for the frequent claim that stakeholder involvement increases the quality of the knowledge base for a policy development process. Because these findings were obtained in a case that featured good process management and a guiding general policy framework from higher authorities, they may not generalize beyond such conditions.","environmental policy; framing; participation; policy development; policy process; stakeholder involvement; uncertainty","en","journal article","Resilience Alliance","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:0b5ce511-d90e-4f6e-a4f9-8e263d88bd2d","http://resolver.tudelft.nl/uuid:0b5ce511-d90e-4f6e-a4f9-8e263d88bd2d","Management of Urban Development Processes in the Netherlands: Governance, Design, Feasibility","","Franzen, A.J. (editor); Hobma, Fred (editor); de Jonge, H. (editor); Wigmans, G. (editor)","2011","Urban interventions are vital to the city. These may involve renewal of inner city areas, transformation of port and industrial areas, industrial renewal, development of new residential areas, the rehabilitation of the historic centre of a town or the development of leisure areas in a city, just to list a few. These various interventions are also given different names, such as urban re-development, urban renewal, urban revitalisation and urban regeneration. In this book we summarise these different interventions under the term ‘urban area development’.
Whether it is minor surgery or a major intervention, with either modest ambitions or big ones, these interventions have one thing in common: they should be managed from conception to realisation. As the title of this book suggests, Management of Urban Development Processes in the Netherlands is about the entire process of managing urban development and covers the full life-cycle of urban areas. Secondarily, the book elaborates on the Dutch approach. The focus is not on comparing Dutch urban area development with the practice in other countries. Nor is it our aim to position Dutch urban area development in an international framework. What the book does aim to do is provide an understanding of current practice and an overview of acquired knowledge and instruments developed in the Netherlands. This is illustrated by (mainly) Dutch examples.","management; urban development; processes; governance; design; feasibility","en","book","Technepress","978-90-8594-029-6","","","","","","","","","Practice Chair Urban Area Development","","",""
"uuid:415efa15-0995-4f8d-a7be-845860b21181","http://resolver.tudelft.nl/uuid:415efa15-0995-4f8d-a7be-845860b21181","Challenges and opportunities of the passive house concept for retrofit","Mlecnik, E.","","2010","For newly built houses and renovations European and national ambitions prescribe increasing levels of energy performances, even including achieving the passive house standard, net zero energy or carbon neutral houses. For highly energy-efficient renovation, project information from first demonstration projects is now becoming available. This paper examines experiences of demonstration projects with improved energy performance, in order to diffuse these experiences to reach other innovators and the early adopter market. Innovation diffusion theory is used to analyse examples of residential renovations using passive house technologies. Further the paper examines challenges and opportunities for the diffusion of demonstrated solutions to an early adopter market. Detailed case studies show that passive house retrofit, as well as low energy retrofit, need more holistic approaches, higher skill competence and strong process coordination. The results show that it is technically feasible to reach outstanding energy performance in renovation. However, social, political and economical issues remain important barriers to reach a more substantial market share. In particular there is a need to cluster energy efficiency principles to focus on substantial energy savings. The research leads to ideas for further study of the possible role of change agencies to support substantial energy reduction in retrofit projects.","renovation; energy efficiency; passive house; innovation diffusion; building process","en","conference paper","University of Salford","","","","","","","","OTB Research Institute","Housing Quality and Process Innovation","","","",""
"uuid:4a756395-4d4b-4a7c-83bd-2cc530e72044","http://resolver.tudelft.nl/uuid:4a756395-4d4b-4a7c-83bd-2cc530e72044","Overview of biorefineries based on co-production of furfural, existing concepts and novel developments","De Jong, W.; Marcotullio, G.","","2010","","green process, biorefinery, biomass, furfural","en","journal article","De Gruyter","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:98f76195-3e2a-4d00-aa44-5949b0c946d9","http://resolver.tudelft.nl/uuid:98f76195-3e2a-4d00-aa44-5949b0c946d9","Physical Modelling for Systems and Control: Lecture Notes Course sc4032, 2009-2010","Bosgra, O.H.","","2010","In these notes the formulation of models is aimed at obtaining a description of the dynamic behaviour of processes under transient conditions. This implies that we will formulate the equations of motion of the process variables that describe the evolution of the process as a function of time. Our models will formulate the process dynamics in a form as required for the understanding of process operations such as startup and shutdown, or for studying the transitions from one operating condition to another one as, e.g., required by grade changes in a production plant or by changes in the composition of the feedstock. Process dynamic models also are of great importance for providing control engineers with qualitative and quantitative descriptions of the transient behaviour of processes that are to be used in model based control system design.","systems and control; process engineering; physical modelling; process dynamics; transient behaviour","en","book","TU Delft","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control (DCSC)","","","",""
"uuid:caa61b5a-b0ad-4904-ac1a-da95c8203047","http://resolver.tudelft.nl/uuid:caa61b5a-b0ad-4904-ac1a-da95c8203047","Local Derivative Post-processing: Challenges for a non-uniform mesh","Ryan, J.K.","","2010","Previous investigations into accuracy enhancement for the derivatives of a discontinuous Galerkin solution demonstrated that there are many ways to approach obtaining higher order accuracy in the derivatives, each with different advantageous properties [J.K. Ryan and B. Cockburn (2009), “Local Derivative Post-Processing for the Discontinuous Galerkin Methods.” Journal of Computational Physics, 228:8642-8664.]. For the discontinuous Galerkin method, the order of accuracy without post-processing for the dth?derivative is k+1-d. For the derivative of the post-processed solution it is 2k+1-d. Additionally, it was demonstrated that not only is calculating the derivative of the post-processed solution itself unnecessary, but also that O(h2k+1) can be obtained for the derivative solution for any order derivative, provided the solution is C2k+1. This is done using higher-order B-splines than used for the post-processed solution itself convolved against a finite difference derivative. This introduces higher levels of smoothness into the derivative post-processed approximation. However, this investigation was limited to a uniform mesh consideration, which is highly restrictive for practical applications. In this report, we discuss the advantages and disadvantages of extending accuracy enhancement of derivatives to non-uniform meshes in one-dimension using the ideas of local L2-projection, characteristic length as well as direct implementation as done for the post-processed solution itself in [S. Curtis, R. M. Kirby, J. K. Ryan, C.-W. Shu (2007), “Post-processing for the discontinuous Galerkin method over non-uniform meshes.” SIAM Journal on Scientific Computing. 
30:272-289.].","accuracy enhancement; post-processing; derivatives; discontinuous Galerkin method; hyperbolic equations","en","report","Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:229ece19-fd8c-4ca1-a38f-dd52c6aa3175","http://resolver.tudelft.nl/uuid:229ece19-fd8c-4ca1-a38f-dd52c6aa3175","Effective reverse conversion in residue number system processors","Gbolagade, K.A.","Sips, H.J. (promotor); Cotofana, S. (promotor)","2010","","residue number systems; data conversion; Chinese remainder; theorem; mixed radix conversion; digital signal processing","en","doctoral thesis","","","","","","","","2010-12-21","Electrical Engineering, Mathematics and Computer Science","Computer Engineering Laboratory","","","",""
"uuid:94d3fa96-6477-4c2b-bb60-49a88aac7465","http://resolver.tudelft.nl/uuid:94d3fa96-6477-4c2b-bb60-49a88aac7465","Coastal Erosion and Solutions: A primer","Kana, T.","TU Delft","2010","This primer describes some of the causes of coastal erosion and tries to put into perspective their scale and consequences. There are no uniform causes, just as there are no uniform solutions. Erosion tends to be site-specific. Yet, with careful observation and measurement, a particular problem can be placed in context and draw from the experience of similar sites. Given the variety of the world's coastlines, many other ""signatures of erosion"" beyond those mentioned here are at work. The attraction for scientists seeking to understand these signatures is the same as the casual tourist's - the ever-changing image of the shore","barrier islands; beach; berm; bulkhead; coast; groyne; coastal processes; nourishment; dunes; shoreline","en","report","Coastal Science & Engineering","","","","","","","","","","","","",""
"uuid:447d8e32-25f5-4d16-b1dd-f11cc245829c","http://resolver.tudelft.nl/uuid:447d8e32-25f5-4d16-b1dd-f11cc245829c","New to Improve: The Mutual Influence between New Products and Societal Change Processes","Joore, J.P.","Brezet, J.C. (promotor); Silvester, S. (promotor)","2010","The focus of design is changing rapidly, as new products are increasingly connected to each other and to the rest of the world. This means that the focus of the designer is less and less on the creation of tangible artifacts, and increasingly on the development of complex interconnected systems. These systems should preferably not only be “new and improved”, but also be “new to improve” society. To support this ambitious vision, a new multilevel design model is discussed that may provide insight into the mutual relationship between new products and the socio-technical and societal contexts in which these products function. This model is tested in two experiments: “Autonomous Elderly”, which links the development of an assisted living center to the Guide Me, a personal tracking system, and “Youth in Motion”, which links the development of a Sports Promotion Field Lab to the development of the Make Me Move, an interactive play floor.","multilevel design; system innovation; product-service system; socio-technical system; societal change process","en","doctoral thesis","VSSD","","","","","","","","Industrial Design Engineering","Design for Sustainability","","","",""
"uuid:aae3e171-cd2a-4700-b17c-1ba9ea005e8c","http://resolver.tudelft.nl/uuid:aae3e171-cd2a-4700-b17c-1ba9ea005e8c","Human Handheld-Device Interaction: An Adaptive User Interface","Fitrianie, S.","Koppelaar, H. (promotor); Rothkrantz, L.J.M. (promotor)","2010","The move to smaller, lighter and more powerful (mobile) handheld devices, whether PDAs or smart-phones, looks like a trend that is building up speed. With numerous embedded technologies and wireless connectivity, this trend opens up unlimited opportunities in daily activities that are both more efficient and more exciting. Despite all these advancing possibilities, the shrinking size and the mobile use impose challenges for both technical and usability aspects of the devices and their applications. An adaptive user interface that is able to autonomously adjust its display and available actions to the current goals, contexts and emotions of its user represents a solution for limited input options, various constraints of the output presentation, and user requirements due to mobility and attention shifting in human handheld-device interaction. The present work takes preliminary steps in proposing a framework for the rapid construction of adaptive user interfaces that are multimodal, context-aware and affective, on handheld devices. The framework consists of predefined modules that are able to work in isolation but can also be connected in an ad hoc way as part of the framework. The modules deal with human handheld-device interaction, the interpretation of the user's actions, knowledge structure and management, the selection of appropriate responses and the presentation of feedback. Human language and visual perception models have been studied in formulating concepts or ideas as both text and visual language-based messages. An adaptive circular on-screen keyboard and visual language-based interfaces have been proposed as alternative input options for fast interaction. 
In particular, sentences in the visual language can be constructed using spatial arrangements of visual symbols, such as icons, lines, arrows and ellipses. As icons offer potential across language barriers, any interaction using the visual language is suitable for language-independent contexts. Personalized predictive and language-based features have also been added to accelerate both input methods. An ontology has been chosen to represent knowledge of the user, the task and the world. The modeling and structure of the knowledge representation has been designed for sharing common semantics, integrating the communication between modules, and fulfilling the context-awareness requirement. It enables the framework to be developed into a widespread application for different domains. The context awareness is approached by interpreting both verbal and non-verbal aspects of user inputs to update the system's belief about the user, the task and the world. Methods and techniques to fuse multiple input modalities for multiple messages from multiple users into a coherent and context-dependent interpretation have been developed. A simple approach to emotion analysis has been proposed to interpret the nonverbal aspect of the inputs. It is based on a keyword-spotting approach, categorizing the emotional state into a certain valence orientation with intensity. The approach is suitable for input recognition with high uncertainty. Template-based interaction management and output generation methods have been developed. The templates have a direct link to concepts in the ontology-based knowledge representation. This approach supports common semantics with the other modules within the framework. It allows the development of a larger-scale system with consistent and easy-to-verify knowledge repositories. A multimodal, multi-user, and multi-device communication system in the field of crisis management, built on the framework, has been developed as a proof of the proposed concepts. 
This system consists of a comprehensive selection of modules for reporting and collaborating on observations using handheld devices in mobile ad-hoc network-based communication. It supports communication using a combination of text, visual language and graphics. The system is able to interpret user messages, construct knowledge of the user, the task and the world, and develop a crisis scenario. User tests assessed whether users are capable of expressing their messages using the provided modalities. The tests also addressed usability issues in interacting with an adaptive user interface on handheld devices. The experimental results indicated that the adaptive user interface is able to support communication between users and between users and their handheld devices. Moreover, an explorative study within this research has also generated knowledge regarding (technical, social and usability aspects of) user requirements in adaptive user interfaces and (generally) human handheld-device interaction. The rationale behind our approaches, designs, empirical evaluations and implications for research on the framework for an adaptive user interface on handheld devices are also described in this thesis.","Human Computer Interaction; Adaptive User Interface; Artificial Intelligence; Natural Language Processing; Software Engineering; Multimodal System; Handheld Device Application; Software Framework; Knowledge Engineering; Situation Awareness; Communication","en","doctoral thesis","Mediamatics","","","","","","","2010-11-12","Electrical Engineering, Mathematics and Computer Science","Man Machine Interaction, Mediamatics","","","",""
"uuid:2a2b1a74-5372-4037-b15f-175f1b742582","http://resolver.tudelft.nl/uuid:2a2b1a74-5372-4037-b15f-175f1b742582","The N2 gateway project in Cape Town: Relocation or forced removal?","Newton, C.","","2010","","urban development; beautification processes; Cape Town; South Africa; mega-events","en","conference paper","","","","","","","","","","","","","",""
"uuid:5127253d-2acf-47b0-b4ec-d23ce5ad8267","http://resolver.tudelft.nl/uuid:5127253d-2acf-47b0-b4ec-d23ce5ad8267","A Processing Technique for OFDM-Modulated Wideband Radar Signals","Tigrek, R.F.","Van Genderen, P. (promotor); Ligthart, L.P. (promotor)","2010","Orthogonal frequency division multiplexing (OFDM) is a multicarrier spread-spectrum technique which finds widespread use in communications. The OFDM pulse compression method that utilizes an OFDM communication signal for radar tasks has been developed and reported in this dissertation. Using the ambiguity function tool, the feasibility of the OFDM pulse compression method was demonstrated from a performance perspective. The two fundamental components of the OFDM communication signal, namely the cyclic prefix guard interval and the random message content, are incorporated in the OFDM radar signal, and the signal processing technique is developed to make use of these features rather than avoid them. The structure of the multicarrier signal and the unique outcome of the novel processing technique offer a new solution to the ambiguity in the Doppler measurements. The Doppler effect, which is considered a cause of pulse compression loss, is compensated for in the frequency domain, and the additional information on the Doppler effect due to the multicarrier structure helps solve the Doppler ambiguity occurring in the coherent integration stage. Two techniques were considered for peak-power reduction; one method modulates the OFDM carriers by Golay complementary codes while the other method applies pre-coding by the DFT matrix to obtain single-carrier OFDM (SC-OFDM). The assessment of these PAPR-control methods for radar applications generated novel results, and answers valid concerns regarding the linearity of the power amplifiers. 
At higher radial velocity, the point target does not occupy a single range bin during the extent of the radar signal but migrates from one range bin to the next. The range migration occurs first at the level of coherent Doppler integration, where the OFDM pulse compression can still assume the Doppler effect to be a frequency shift. At higher velocities, the actual Doppler effect, which is a scaling of the signal, cannot be accurately modeled as a frequency shift anymore. Compensation techniques for both effects were developed in this thesis and verified through simulations.","OFDM; radar signal processing; spread spectrum radar; range migration; PAPR; Golay codes; radar communication","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","",""
"uuid:0c2a5614-0488-45f7-be3b-b9494b5c9e93","http://resolver.tudelft.nl/uuid:0c2a5614-0488-45f7-be3b-b9494b5c9e93","Chemical Leasing of solvents a sustainable approach for metal cleaning","Saecker, S.; Willms, L.","","2010","High quality solvent cleaning is indispensable for human progress, especially in applications concerning people's safety and security. Effective risk management enables the sustainable use of chlorinated solvents. SAFECHEM has implemented innovative business models such as Chemical Leasing, which enable industry to maintain the required surface cleaning quality. By combining best-practice risk management with continuous process optimization, Chemical Leasing leads to a significant increase in customer satisfaction and solvent efficiency with virtually no emissions.","chemical leasing; risk management; process optimization; chemical products services (CPS); sustainable metal cleaning","en","conference paper","","","","","","","","","","","","","",""
"uuid:c4db9d54-9170-4477-9ac2-74322401d9a8","http://resolver.tudelft.nl/uuid:c4db9d54-9170-4477-9ac2-74322401d9a8","The mechanics of sustainable urban development: The Case of Lanxmeer, Culemborg, NL","Vernay, A.L.; Salcedo Rahola, T.B.; Ravesteijn, W.","","2010","Sustainable urban development projects are initiated almost every day. They are the results of very dynamic processes during which a number of decisions have to be taken according to the local conditions. Eventually each project follows its own path. The aim of this paper is to gain insight into the mechanics of sustainable urban development in order to better understand why certain pathways are chosen rather than others. To do so we propose a methodology to analyze sustainable urban developments and apply it to EVA-Lanxmeer, a project completed in Culemborg, the Netherlands. Ultimately, applying this methodology could help formulate recommendations for urban planners interested in starting and running such a process.","Sustainable urban development; methodology; urban pathway; process","en","conference paper","","","","","","","","","","","","","",""
"uuid:c9e3a31b-2174-4c28-8b85-d4c152de85ab","http://resolver.tudelft.nl/uuid:c9e3a31b-2174-4c28-8b85-d4c152de85ab","The CO2PE!-initiative (cooperative effort on process emissions in manufacturing): International framework for sustainable production","Kellens, K.; Dewulf, W.","","2010","Manufacturing processes are responsible for a substantial part of the environmental impact of products but are still poorly documented in terms of their environmental footprint. The lack of thorough analysis of manufacturing processes has the consequence that optimization opportunities are often not recognized and that improved machine tool design in terms of ecological footprint has only been targeted for a few common processes. At the same time, a trend can be observed towards more energy intensive, unconventional processing techniques. In order to address these shortcomings, a worldwide consortium of universities and research institutes launched the CO2PE!-Initiative. The objective of this initiative is to coordinate international efforts aiming to document and analyze the overall environmental impact for a wide range of available and emerging manufacturing processes with respect to direct and indirect emissions, and to provide guidelines to improve these processes. In addition to life cycle analysis, in-depth process analysis also provides insight into achievable environmental impact reduction measures for machine builders and eco-design recommendations for product developers. In this paper, the CO2PE!-Initiative is described along with an overview of case studies to illustrate how the CO2PE! methodology works.","CO2PE!; unit process; environmental impact; sustainable manufacturing","en","conference paper","","","","","","","","","","","","","",""
"uuid:205de25c-051f-48a3-aa03-1ea4d22a726f","http://resolver.tudelft.nl/uuid:205de25c-051f-48a3-aa03-1ea4d22a726f","Shaping decisions and processes for more sustainable urban environments","Boyko, C.; Cooper, R.","","2010","To know whether eco-cities or, indeed, any so-called sustainable urban environment, will be successful, an understanding is needed about how such places 'come into being'. Understanding how decisions have been made, who makes them and when and how they are made is crucial to ensuring that the 'right' people have been involved at the 'right' time. In describing and visualizing this, we are providing a framework-in this case, an urban design decision-making process-that highlights the stakeholders as well as the tensions, tradeoffs and decisions that need to be made in the name of shaping sustainable developments. This paper presents the findings from a large-scale research project about sustainable urban design decision-making and the 24-hour city. Through in-depth, case study research in three UK cities, the work identified and visualized a new framework for the urban design decision-making process as well as making crucial connections to urban form, the urban experience and urban policy. The project is described, highlighting the multi-disciplinary team approach and the diverse areas explored within the project. The three case study cities are then discussed briefly, followed by some of the distinct, area-focused results as well as some of the integrated findings. In particular, the improved urban design process will be explained along with a description of some of the tools and techniques developed for urban design decision-makers.","urban design; decision-making; process; multi-disciplinary","en","conference paper","","","","","","","","","","","","","",""
"uuid:2b9fab9e-2ddf-403b-b806-b35d0e1c03c8","http://resolver.tudelft.nl/uuid:2b9fab9e-2ddf-403b-b806-b35d0e1c03c8","The role of human factors in the adoption of sustainable design criteria in business","Verhulst, E.; Boks, C.","","2010","Implementing sustainability in business is steadily gaining more attention. A growing number of companies currently work on the implementation of sustainable criteria in the design related departments and their design processes. Although theory and methods are available, practice shows that this integration process of sustainability criteria is not straightforward. In this paper, different cases from practice are described based on a study of five Flemish and three Dutch firms that are broadening their (sometimes already extensive) experience on this implementation process. The emphasis of this paper is put on influential factors in the implementation process of sustainability criteria inside a firm's product development department, with a focus on the need for a sustainability vision and strategy, resistance against sustainability and the link between internal communication and resistance. Our data suggest that a clear vision, mission, strategy and planning of the implementation process of sustainability criteria are needed, but not necessarily from the beginning of the process. Apart from that, factors of resistance appear to evolve throughout the implementation process, varying in nature (organizational versus personal) and content. Lastly, three types of communication are suggested that need to be considered and applied in order to involve, support and inform employees so as to progress positively in the direction of more sustainable products and processes.","Sustainability criteria; implementation process; human factors; product development process","en","conference paper","","","","","","","","","","","","","",""
"uuid:9f454d5b-7287-4fa3-9635-9405d05f96da","http://resolver.tudelft.nl/uuid:9f454d5b-7287-4fa3-9635-9405d05f96da","Low carbon solutions for the food industry","Schnitzer, H.; Muster-Slawitsch, B.; Brunner, C.","","2010","Seen globally, the agro-food sector is one of the largest emitters of greenhouse gases. Starting with agriculture, and considering the transport, the processing and the retail, there are hundreds of options to reduce emissions to air, water and soil. Most processes in the food industry take place at rather low temperatures below 100°C and at moderate pressures. This situation offers possibilities for solar thermal heating and the utilization of waste heat. The large amounts of organic waste in the whole chain imply potential for biogas processes, as often as possible combined with cogeneration of heat and power. Large amounts of energy in the agro-food sector are used for storage, cooling and freezing; here, too, there are possibilities for energy efficiency and the use of renewable energy. Taking also into account the energy-efficient options for transport and the possibilities of reusing water from the production processes for irrigation, the whole system starts to become sustainable. Putting all these options together, one ends up with a low-carbon agro-food system, which in an optimal constitution ends up as a Zero-Emissions Agro-Food system.","food industry; biorefinery; low carbon; agriculture; process integration","en","conference paper","","","","","","","","","","","","","",""
"uuid:159f8072-c0c9-4461-8e7c-75b00b9db21f","http://resolver.tudelft.nl/uuid:159f8072-c0c9-4461-8e7c-75b00b9db21f","The mechanics of sustainable urban development: The case of Lanxmeer, Culemborg, NL","Vernay, A.L.; Salcedo Rahola, T.B.; Ravesteijn, W.","","2010","Sustainable urban development projects are initiated almost every day. They are the results of very dynamic processes during which a number of decisions have to be taken according to the local conditions. Eventually each project follows its own path. The aim of this paper is to gain insight into the mechanics of sustainable urban development in order to better understand why certain pathways are chosen rather than others. To do so we propose a methodology to analyze sustainable urban developments and apply it to EVA-Lanxmeer, a project completed in Culemborg, the Netherlands. Ultimately, applying this methodology could help formulate recommendations for urban planners interested in starting and running such a process.","sustainable urban development; methodology; urban pathway; process","en","conference paper","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:a4cf5848-a8cc-4afa-ba53-83f7e8c3aad9","http://resolver.tudelft.nl/uuid:a4cf5848-a8cc-4afa-ba53-83f7e8c3aad9","Personality factors at the base of sustainable behavior: A first approach across two universities","Juárez-Nájera, M.","","2010","The theoretical framework of cognitive information processing holds that mental processes such as the formation of beliefs, attitudes, or perceptions are impossible to directly observe and measure. This condition is resolved by implementing questionnaires integrated with hypothetical variables which can measure individual responses as real entities. The hypothetical variables related to individual personality features in this study were ascription of responsibility, universal values, personal skills, and awareness of consequences. Universal values discern inherent conflicts among people's motivational goals. Personal skills are concerned with the capacity to understand the intentions of others and oneself. Ascription of responsibility and awareness of consequences explain people's desire to take action. The four factors are statistically highly related and proved to explain the construct of sustainable behavior (SB) (Juárez-Nájera, 2009). SB, as defined in this work, is an effective disposition to act. It was tested by applying a 67-item questionnaire among 106 participants: 69 Mexicans from the Universidad Autónoma Metropolitana, Azcapotzalco campus, and 37 Germans from Leuphana Universitaet Lueneburg, Institut fur Umweltkommunication. SB was validated by means of an exploratory principal component analysis (PCA), which searches for a factor structure underlying the hypothetical variables. To this end, this paper presents a pattern of the first components found by the PCA and the representative relations of the personality factors which explain sustainable behavior across participants in higher education institutions from two countries with vastly different cultures and economies. 
Ascription of responsibility appears as the main personal factor.","Cognitive information processing; personality latent variables; sustainable behavior at higher educational institutions","en","conference paper","","","","","","","","","","","","","",""
"uuid:e840db74-6870-44b9-8f4c-29b78355fcbe","http://resolver.tudelft.nl/uuid:e840db74-6870-44b9-8f4c-29b78355fcbe","Remanufactured fashions: A pathway to sustainability","Dissanayake, G.; Sinha, P.","","2010","There has been a significant increase in the volume of new clothing sales over the last ten years; indeed, clothing is the fastest growing component of the household waste stream, raising the potential for a similar increase in the volume of textile waste disposed of in landfill sites and the resultant harm to the environment. As the volume of throwaway fashion increases and the quality of fabric decreases, there is a need for an innovative approach to generating and managing this type of waste. Prior work on managing post-consumer textiles concurs with the Waste Hierarchy, i.e., that reusing and remanufacturing fashion items makes the least impact on energy use. A number of fashion designers have developed businesses using this approach, but these are usually niche-market, and the environmental benefit may not be as significant as in the mass markets that are currently catered for by the large retailers using the current conventional design processes and supply chains. This paper will present and examine empirical data regarding design and remanufacturing processes as practiced by fashion designers in the niche market and the design processes within the large mass-market retailers and manufacturers. The paper will then consider the current fashion supply and value chain, particularly issues around design and the use of technology within it, to explore opportunities for incorporating a remanufacturing approach within the conventional supply chain, identifying issues and providing recommendations. This examination will identify issues around design for social sustainability and design for sustainable behavior. The paper concludes with suggestions for future areas of study.","Recycle; remanufacturing; sustainability; fashion; design process","en","conference paper","","","","","","","","","","","","","",""
"uuid:0f1772a4-0b9c-4e07-9ec5-15e4cc879ecf","http://resolver.tudelft.nl/uuid:0f1772a4-0b9c-4e07-9ec5-15e4cc879ecf","Gamma distribution models for transit time estimation in catchments: Physical interpretation of parameters and implications for time-variant transit time assessment","Hrachowitz, M.; Soulsby, C.; Tetzlaff, D.; Malcolm, I.A.; Schoups, G.","","2010","In hydrological tracer studies, the gamma distribution can serve as an appropriate transit time distribution (TTD) as it allows more flexibility to account for nonlinearities in the behavior of catchment systems than the more commonly used exponential distribution. However, it is unclear which physical interpretation can be ascribed to its two parameters (α, β). In this study, long-term tracer data from three contrasting catchments in the Scottish Highlands were used for a comparative assessment of interannual variability in TTDs and resulting mean transit times (MTT = αβ) inferred by the gamma distribution model. In addition, spatial variation in the long-term average TTDs from these and six additional catchments was also assessed. The temporal analysis showed that the α parameter was controlled by precipitation intensities above catchment-specific thresholds. In contrast, the β parameter, which showed little temporal variability and no relationship with precipitation intensity, was found to be closely related to catchment landscape organization, notably the hydrological characteristics of the dominant soils and the drainage density. The relationship between α and precipitation intensity was used to express α as a time-varying function within the framework of lumped convolution integrals to examine the nonstationarity of TTDs. The resulting time-variant TTDs provided more detailed and potentially useful information about the temporal dynamics and the timing of solute fluxes. 
It was shown that in the wet, cool climatic conditions of the Scottish Highlands, the transit times from the time-variant TTD were roughly consistent with the variations of MTTs revealed by the interannual analysis.","runoff processes; transit times; gamma distribution; chloride; tracers","en","journal article","American Geophysical Union","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:533cae4d-3197-406e-9dde-bdb4eb36b42f","http://resolver.tudelft.nl/uuid:533cae4d-3197-406e-9dde-bdb4eb36b42f","Modeling Audio Fingerprints: Structure, Distortion, Capacity","Doets, P.J.O.","Lagendijk, R.L. (promotor)","2010","An audio fingerprint is a compact low-level representation of a multimedia signal. An audio fingerprint can be used to identify audio files or fragments in a reliable way. The use of audio fingerprints for identification consists of two phases. In the enrollment phase known content is fingerprinted, and ingested into a database, together with all relevant metadata. In the identification phase, unknown audio content is fingerprinted, and the fingerprints form the query to the database. The query fingerprint is compared to the fingerprints in the database. If a similar fingerprint is found in the database, the relevant metadata corresponding to the fingerprint is returned. In this thesis we develop models for audio fingerprints. The emphasis here is on fingerprint extraction and the properties of the fingerprint, not on matching the query fingerprint to the fingerprints in the database, nor on the actual identification. We also do not develop new practical fingerprinting algorithms. There is a wide variety of applications for audio fingerprinting, including broadcast monitoring, audience measurement, forensic applications, blacklisting of unauthorized content, 'name that tune' services and linking of special offers to television or radio commercials. Content which uses the same recorded source material, but which is in a different representation, or distorted in different ways, will generate similar audio fingerprints. This distinguishes audio fingerprints from hashes and content-based retrieval. The hash of an audio file changes when one sample changes. Two perceptually equal audio items can have completely different hash values, but will generate similar fingerprints. 
Content-based retrieval looks for audio items which apply to a similar concept, like the same genre, artist or style, while fingerprinting looks for the reuse of the recorded content. Of course, the exact requirements for a fingerprinting system strongly depend on the application. Relevant aspects for the topics discussed in this thesis are the robustness, uniqueness, accuracy (notably the False Acceptance Rate and False Rejection Rate), granularity and the size of the fingerprints. In this thesis we make three contributions in the form of models. First, we model the structure of a particular type of audio fingerprint, the Philips Robust Hash (PRH). The PRH fingerprint extracts a series of spectral energy related features from the audio signal, which are represented efficiently but coarsely as a binary time-series. The time-series captures the temporal and spectral dynamics of the audio signal, and has a very particular structure mainly depending on a limited number of parameters in the fingerprint extraction. The model describes the structure of the PRH as a function of a number of parameters. It can be used for better understanding and potentially optimization of the fingerprinting system. We experimentally verify the model on synthetic Gaussian iid data, and conclude that the model captures the structure of the PRH fingerprint well. This analysis was reformulated and extended by Balado, Hurley, McCarthy and Silvestre. Second, we observe that distortions in the audio are reflected in changes in the corresponding fingerprint. This kind of distortion affects the quality of the audio signal and changes the resulting fingerprint. The idea is to estimate the amount of distortion on the audio signal by comparing the corresponding fingerprint to a reference fingerprint extracted from a high quality copy of the same audio. In this way one could extend the functionality of a fingerprinting system. 
We implement and compare the behaviour of a number of algorithms from literature, and observe similar behaviour of the distance between corresponding fingerprints due to compression. We model the effect of particular distortions in the audio due to compression or additive white noise on the difference introduced in the PRH fingerprints. The main result of our modeling effort is a closed form relation between Signal-to-Noise Ratio (SNR) and average fingerprint distance for PRH audio fingerprints of independent identically distributed (iid) signals. We also experimentally verify the developed models. The model fits perfectly for synthetic signals, and captures the behavior observed in a wider variety of fingerprinting algorithms on actual music. Third, we consider an information theoretical framework developed by Westover and O'Sullivan (WOS). The main question is `how many signals can be identified by a fingerprinting system, under certain conditions'. The conditions relate to characteristics of the fingerprint (size of the fingerprint, and representation of the fingerprint), and characteristics of the environment in which the system operates (representation and statistical characteristics of the signals that need to be identified, how much distortion is allowed). We use the results of the model developed for the PRH fingerprint to estimate how many signals can be identified with a binary fingerprint like the PRH. Finally, we check whether the changes in the fingerprints we observe in practice due to distortions in the audio signals, and which have been modeled in this thesis, fit in the information theoretical framework of the WOS model. We outline the differences in the WOS-model compared to practical implementations. 
We finish with a list of recommendations on extending the models: to jointly consider distortion and uniqueness characteristics; to take more distortion types into account and to extend to images and video; to develop an evaluation framework for audio fingerprinting; to integrate psycho-acoustics; and to develop a theoretical framework for comparing specific algorithms to the capacity bound.","audio fingerprint; robust hash; signal processing; information theory","en","doctoral thesis","","","","","","","","2010-10-20","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:f50bcf2a-97ec-4c39-99fd-5b09349256ae","http://resolver.tudelft.nl/uuid:f50bcf2a-97ec-4c39-99fd-5b09349256ae","Carrier multiplication and exciton behavior in PbSe quantum dots","Tuan Trinh, M.","Laurens Siebbeles, D.A. (promotor); Juleon Schins, M. (promotor)","2010","Knowledge of excited electronic states in semiconductor quantum dots (QDs) is of fundamental scientific interest and is important for applications in lasers, optical detectors, LEDs, solar cells, photocatalysis, biomedical imaging, photodynamic therapy, etc. In the past few years, carrier multiplication (CM) in QDs has received particular attention, due to promising prospects for exploitation in highly efficient solar cells, photodetectors and possibly photocatalysis. CM can occur when absorption of a high-energy photon leads to production of an excited electron or a hole with an excess energy that exceeds the QD band gap. CM involves transfer of (part of) the excess energy of the excited electron or hole to one or more valence electrons that also become excited across the band gap via a process denoted as impact ionization. In this way absorption of a single photon can lead to excitation of two or more electrons. This thesis describes studies of factors affecting the efficiency of CM in QDs based on PbSe. Ultrafast time-resolved optical pump and probe spectroscopy is used to characterize the nature of photoexcited states, the efficiency of CM, hot exciton cooling and Auger recombination of multiexcitons. The occurrence of CM has been reported by several research groups for QDs consisting of PbX (X = S, Se, Te), CdSe, InAs, and Si. However, in some other studies CM was not observed for CdSe, CdTe, and InAs QDs, thus raising legitimate doubts concerning the occurrence of CM in the other materials. In the work of chapter 2 conclusive evidence is given for the occurrence of CM in PbSe QDs. Possible artifacts due to multi-photon absorption and charge trapping are excluded. 
It is shown that for higher exciton multiplicity a correct determination of the CM efficiency requires spectral integration over the photobleach feature. The CM efficiency of ηCM = 1.7 obtained at a photon energy of 4.8 times the band gap is close to results that have appeared in the literature more recently. Chapter 3 describes studies of the dynamics of hot excitons in PbSe QDs, PbSe/PbS core/shell QDs, and PbSe/PbSexS1-x core/alloyed-shell QDs. The ground state optical absorption exhibits a red-shift on introduction of a shell around a PbSe core, which increases with the thickness of the shell. According to electronic structure calculations, this can be attributed to electron delocalization into the shell. Remarkably, the CM efficiency, the hot exciton cooling rate, and the Auger recombination rate of multiexcitons are similar for PbSe core-only QDs and core/shell QDs with the same core size and varying shell thickness, despite the marked variations in the density of states evidenced by the changes in optical spectra. It is concluded that different effects that may serve to speed up or slow down exciton dynamics, such as variations in density of states, shell-induced asymmetry in the band structure and hot exciton cooling, counteract one another. The second transition in the ground state optical absorption spectrum of PbSe QDs is arguably the most discussed optical transition in semiconductor QDs. Ten years of scientific debate have produced many theoretical and experimental claims for the assignment of this feature as the 1Pe1Ph as well as the 1Sh,e1Pe,h transitions. The studies described in chapter 4 show that the strength of the second optical transition in the absorption spectrum of PbSe QDs is not affected by the presence of 1Sh1Se excitons, even if four of those excitons are introduced. Hence, the second optical transition involves neither 1Se nor 1Sh states. 
This suggests that it is the 1Ph1Pe transition that gives rise to the second peak in the absorption spectrum of PbSe QDs. The transitions causing extinction in the energy region between the 1Sh1Se and 1Ph1Pe transitions were investigated, as described in chapter 5. The ultrafast transient absorption data indicate that the extinction in this region is not due to Rayleigh scattering, nor to local field effects, but to the formally forbidden 1Ph1Se and 1Sh1Pe transitions. These optical transitions can become allowed due to deviations of the QD shape from ideal spherical symmetry. For applications, it is essential that multiple charges are extracted from multiexcitons generated within a QD, prior to decay by Auger recombination. Therefore the optical properties and decay kinetics of multiexcitons were studied. The results are presented in chapter 6. The first and second optical transitions in the ground state absorption spectrum of PbSe QDs are strongly shifted to the red as the number of 1Sh1Se spectator excitons increases. These red-shifts can be attributed to Coulomb interactions. The lifetimes before Auger decay of 1Sh1Se multiexcitons were determined. The population decay for 6.8 nm PbSe QDs could be described by assuming the Auger recombination rate to increase exponentially with the number of excitons in a QD. For smaller QDs, the exponential and another 3-charge interaction model reproduce the experimental data equally well. Further studies are needed to unravel the effects of QD size on the dependence of Auger recombination on the number of excitons.","Quantum dots; Carrier multiplication; PbSe; Solar cells; Ultrafast processes","en","doctoral thesis","","","","","","","","","Applied Sciences","ChemE","","","",""
"uuid:cdd1b24e-c4f0-4e44-a034-5a9e9216287c","http://resolver.tudelft.nl/uuid:cdd1b24e-c4f0-4e44-a034-5a9e9216287c","First passage percolation on random graphs with finite mean degrees","Bhamidi, S.; Van der Hofstad, R.; Hooghiemstra, G.","","2010","","flows; random graph; first passage percolation; hopcount; central limit theorem; coupling to continuous-time branching processes; universality","en","journal article","Institute of Mathematical Statistics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:19920864-606e-4cb1-9f67-9a278f62a1f8","http://resolver.tudelft.nl/uuid:19920864-606e-4cb1-9f67-9a278f62a1f8","Fast reconstruction and prediction of frozen flow turbulence based on structured Kalman filtering","Fraanje, P.R.; Rice, J.; Verhaegen, M.; Doelman, N.","","2010","Efficient and optimal prediction of frozen flow turbulence using the complete observation history of the wavefront sensor is an important issue in adaptive optics for large ground-based telescopes. At least for the sake of error budgeting and algorithm performance, the evaluation of an accurate estimate of the optimal performance of a particular adaptive optics configuration is important. However, due to the large number of grid points, high sampling rates, and the non-rationality of the turbulence power spectral density, the computational complexity of the optimal predictor is huge. This paper shows how a structure in the frozen flow propagation can be exploited to obtain a state-space innovation model with a particular sparsity structure. This sparsity structure enables one to efficiently compute a structured Kalman filter. By simulation it is shown that the performance can be improved and the computational complexity can be reduced in comparison with auto-regressive predictors of low order.","probability theory, stochastic processes, and statistics; active or adaptive optics; turbulence; wave-front sensing; active or adaptive optics","en","journal article","Optical Society of America","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:ba0cd4c2-388a-4cc8-9b76-bed680378e22","http://resolver.tudelft.nl/uuid:ba0cd4c2-388a-4cc8-9b76-bed680378e22","Beyond digital interference cancellation","Venkateswaran, V.","Van der Veen, A.J. (promotor)","2010","One of the major obstacles to the realization of MIMO and multi-sensor wireless communication systems is that the multiple antennas at the receiver each have their own separate radio frequency (RF) front ends and analog-to-digital converter (ADC) units, leading to increased circuit size and power consumption. Improvements in RF and ADC technology happen at a much slower pace than those in digital circuits, so this problem is likely to become more critical in the future. In a dense multi-user wireless communication setup, these multiple RF front ends and ADCs spend most of their power in processing signals from interfering users. The purpose of this research is to look at alternative mobile receiver architectures, from the joint perspective of a digital signal processing engineer as well as that of an RF designer. We start by specifying the need for a convergence of RF and DSP techniques. We propose that advanced signal processing algorithms can be used in combination with existing circuit configurations, such as integrated phased arrays and multi-channel feedback ADCs, to perform analog interference cancellation. Interference cancellation allows for a reduced number of receiver chains and low-resolution ADCs, hence reduced circuit size and power consumption. In summary, the research addresses the following questions: - Can we potentially reduce the cost and power dissipation of MIMO transceivers by optimization across the RF-baseband borderline? - Can we design a flexible baseband platform that is tailored to low-power circuits, demonstrating a potential for low cost in a dense multi-user setup? One approach to cancel interference in RF and to reduce the number of receiver chains in antenna array systems is to design RF phase shift combiners. 
An alternative is to integrate existing ADCs with a feedback beamformer (this setup is especially compatible with Sigma-Delta ADCs) to identify and cancel the interferer. Interference cancellation in the RF and in the mixed signal components of the receiver allows the ADC units to represent the desired user more effectively for a fixed precision. For both of the above mentioned architectures, we consider the hardware limitations and propose closed-form solutions minimizing the overall mean squared distortion between the transmitted signals and their received estimates, and illustrate significant power savings in the receiver. In both cases we also specify approximate solutions for when the closed-form solutions are not feasible. Given such architectures, we propose techniques to estimate the changes in state of the wireless channel. Finally, we also show that these approaches have the capacity to cancel the intermodulation products arising from the non-linearity of the RF components. On a higher level, it is imperative for the DSP engineer to stop looking at ADCs and RF components as ""black boxes"" within a sensing/communications system. For example, viewing a digitally assisted Sigma-Delta ADC as an equalizer or viewing multi-antenna RF circuits as integrated phased arrays to cancel interference may result in highly efficient joint solutions for mapping radio waves into the digital domain. Clearly, such hybrid architectures will result in DSP techniques driving the wireless revolution rather than being an afterthought for coping with the imperfections.","interference cancellation; beamforming; signal processing; wireless communications; ADCs; RF phase shifters","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Circuits and Systems","","","",""
"uuid:945f41a8-8c8b-497a-bdb3-a84477dd1215","http://resolver.tudelft.nl/uuid:945f41a8-8c8b-497a-bdb3-a84477dd1215","Added-value milk fat derivatives from integrated processes using supercritical technology","Lubary Fleta, M.","Jansens, P.J. (promotor)","2010","Milk fat has a very rich chemical composition and unique organoleptic properties. It is the only relevant natural source of short-chain fatty acids (C4 to C10), which have been associated with several health effects. Milk fat also contains a series of minor, bioactive lipids with anticarcinogenic, antidepressant and bactericidal activity. In recent decades, however, the consumption of milk fat in developed countries has decreased gradually, mainly due to its relatively high price compared to margarines and a negatively perceived health image, derived from its content of cholesterol and saturated fatty acids. A consequence of this decline is the accumulation of milk fat stocks, which leads to instability in the dairy sector. A promising strategy for the revalorization of milk fat encompasses a more effective use of both its major and minor components. The aim of this thesis was to conceive and develop novel routes for the synthesis of added-value derivatives from milk fat, focusing on the use of the major components of milk fat (fatty acids) and on the preservation of its natural characteristics (flavor, aroma, texture). To this end, enzymatic or physical modifications of milk fat were applied. The use of supercritical carbon dioxide as a processing solvent in reaction, extraction and micronization operations was explored in this context, with a special focus on the possibility of process integration.","supercritical carbon dioxide; milk fat; innovative processing","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:88238354-82de-43ee-b05f-750cd41bbf32","http://resolver.tudelft.nl/uuid:88238354-82de-43ee-b05f-750cd41bbf32","A Pure Object-Oriented Embedding of Attribute Grammars","Sloane, A.M.; Kats, L.C.L.; Visser, E.","","2010","Attribute grammars are a powerful specification paradigm for many language processing tasks, particularly semantic analysis of programming languages. Recent attribute grammar systems use dynamic scheduling algorithms to evaluate attributes by need. In this paper, we show how to remove the need for a generator, by embedding a dynamic approach in a modern, object-oriented programming language to implement a small, lightweight attribute grammar library. The Kiama attribution library has similar features to current generators, including cached, uncached, circular, higher-order and parameterised attributes, and implements new techniques for dynamic extension and variation of attribute equations. We use the Scala programming language because of its combination of object-oriented and functional features, support for domain-specific notations and emphasis on scalability. Unlike generators with specialised notation, Kiama attribute grammars use standard Scala notations such as pattern-matching functions for equations and mixins for composition. A performance analysis shows that our approach is practical for realistic language processing.","language processing; compilers; domain-specific languages","en","journal article","Elsevier","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Computer Technology","","","",""
"uuid:f8d8b93e-d90c-4d44-addf-1873edf600ff","http://resolver.tudelft.nl/uuid:f8d8b93e-d90c-4d44-addf-1873edf600ff","A perspective on 3D surface-related multiple elimination","Dragoset, B.; Verschuur, D.J.; Moore, I.; Bisley, R.","","2010","Surface-related multiple elimination (SRME) is an algorithm that predicts all surface multiples by a convolutional process applied to seismic field data. Only minimal preprocessing is required. Once predicted, the multiples are removed from the data by adaptive subtraction. Unlike other methods of multiple attenuation, SRME does not rely on assumptions or knowledge about the subsurface, nor does it use event properties to discriminate between multiples and primaries. In exchange for this “freedom from the subsurface,” SRME requires knowledge of the acquisition wavelet and a dense spatial distribution of sources and receivers. Although a 2D version of SRME sometimes suffices, most field data sets require 3D SRME for accurate multiple prediction. All implementations of 3D SRME face a serious challenge: The sparse spatial distribution of sources and receivers available in typical seismic field data sets does not conform to the algorithmic requirements. There are several approaches to implementing 3D SRME that address the data sparseness problem. Among those approaches are pre-SRME data interpolation, on-the-fly data interpolation, zero-azimuth SRME, and true-azimuth SRME. Field data examples confirm that (1) multiples predicted using true-azimuth 3D SRME are more accurate than those using zero-azimuth 3D SRME and (2) on-the-fly interpolation produces excellent results.","geophysical signal processing; seismology","en","journal article","Society of Exploration Geophysicists","","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:af983e8d-aa01-4fa9-9b70-a28a88bc227f","http://resolver.tudelft.nl/uuid:af983e8d-aa01-4fa9-9b70-a28a88bc227f","Articulated Whole-Body Atlases for Small Animal Image Analysis: Construction and Applications","Khmelinskii, A.; Baiker, M.; Kaijzel, E.L.; Chen, J.; Reiber, J.H.C.; Lelieveldt, B.P.F.","","2010","Purpose: Using three publicly available small-animal atlases (Sprague–Dawley rat, MOBY, and Digimouse), we built three articulated atlases and present several applications in the scope of molecular imaging. Procedures: Major bones/bone groups were manually segmented for each atlas skeleton. Then, a kinematic model for each atlas was built: each joint position was identified and the corresponding degrees of freedom were specified. Results: The articulated atlases enable automated registration into a common coordinate frame of multimodal small-animal imaging data. This eliminates the postural variability (e.g., of the head, back, and front limbs) that occurs between different time steps and due to modality differences and nonstandardized acquisition protocols. Conclusions: The articulated atlas proves to be a useful tool for multimodality image combination, follow-up studies, and image processing in the scope of molecular imaging. The proposed models were made publicly available.","small animal imaging; C57BL/6; C3H mouse; SD rat; articulated atlas; image processing; registration; microCT; BLI; microMRI","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","","",""
"uuid:6f288447-543c-4a08-9c98-d8520dec0646","http://resolver.tudelft.nl/uuid:6f288447-543c-4a08-9c98-d8520dec0646","Local derivative post-processing: Challenges for a non-uniform mesh","Ryan, J.K.","","2010","Previous investigations into accuracy enhancement for the derivatives of a discontinuous Galerkin solution demonstrated that there are many ways to approach obtaining higher order accuracy in the derivatives, each with different advantageous properties. For the discontinuous Galerkin method, the order of accuracy without post-processing for the dth derivative is k+1-d. For the derivative of the post-processed solution it is 2k+1-d. Additionally, it was demonstrated that not only is calculating the derivative of the post-processed solution itself unnecessary, but also that order 2k+1 can be obtained for the derivative solution for any order derivative, provided the solution is 2k+1 continuous. This is done by convolving B-splines of higher order than those used for the post-processed solution itself against a finite difference derivative. This introduces higher levels of smoothness into the derivative post-processed approximation. However, this investigation was limited to uniform meshes, which is highly restrictive for practical applications. In this report, we discuss the advantages and disadvantages of extending accuracy enhancement of derivatives to non-uniform meshes in one dimension using the ideas of local L2-projection, characteristic length, as well as direct implementation as done for the post-processed solution itself.","accuracy enhancement; discontinuous Galerkin; post-processing; derivatives; hyperbolic equations","en","report","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:cb8bfdf7-5440-41ac-a8f2-7bea858b730a","http://resolver.tudelft.nl/uuid:cb8bfdf7-5440-41ac-a8f2-7bea858b730a","Bed composition generation for morphodynamic modeling: Case study of San Pablo Bay in California, USA","Van der Wegen, M.; Dastgheib, A.; Jaffe, B.E.; Roelvink, J.A.","","2010","Applications of process-based morphodynamic models are often constrained by limited availability of data on bed composition, which may have a considerable impact on the modeled morphodynamic development. One may even distinguish a period of “morphodynamic spin-up” in which the model generates the bed level according to some ill-defined initial bed composition rather than describing the realistic behavior of the system. The present paper proposes a methodology to generate bed composition of multiple sand and/or mud fractions that can act as the initial condition for the process-based numerical model Delft3D. The bed composition generation (BCG) run does not include bed level changes, but does permit the redistribution of multiple sediment fractions over the modeled domain. The model applies the concept of an active layer that may differ in sediment composition above an underlayer with fixed composition. In the case of a BCG run, the bed level is kept constant, whereas the bed composition can change. The approach is applied to San Pablo Bay in California, USA. Model results show that the BCG run reallocates sand and mud fractions over the model domain. Initially, a major sediment reallocation takes place, but development rates decrease in the longer term. Runs that take the outcome of a BCG run as a starting point lead to more gradual morphodynamic development. Sensitivity analysis shows the impact of variations in the morphological factor, the active layer thickness, and wind waves. 
An important but difficult-to-characterize criterion for a successful application of a BCG run is that it should not lead to a bed composition that fixes the bed so that it dominates the “natural” morphodynamic development of the system. Future research will focus on a decadal morphodynamic hindcast and comparison with measured bathymetries in San Pablo Bay so that the proposed methodology can be tested and optimized.","process-based model; morphodynamic prediction; bed composition; estuarine processes; San Pablo Bay; data scarcity; coastal geomorphology","en","journal article","Springer","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:2458b3cd-547f-47b3-949f-e5b3db27caa2","http://resolver.tudelft.nl/uuid:2458b3cd-547f-47b3-949f-e5b3db27caa2","What determines nearshore sandbar response?","Smit, M.; Reniers, A.; Stive, M.J.F.","","2010","Nearshore sandbars appear with various patterns which may change over time. From observations, these changes seem to be related to changes in hydrodynamic conditions, although observed length scales could not be related directly to occurring wave conditions. The current work investigated the role of both the concurrent and previous hydrodynamics as well as the role of the pre-existing morphological variability of a nearshore bar system. A suite of modeling efforts using a depth-averaged process-based model was analysed in terms of predicted length scales, response times and evolving levels of variability. It was found that with small or moderate hydrodynamic forcing, an existing pattern would remain. Only when the existing pattern was alongshore uniform would the bar pattern change in response to the conditions. When the hydrodynamic conditions are extreme, an existing pattern can be erased, resulting in an alongshore uniform bathymetry – a reset event.","nearshore sandbars, morphodynamics, patterns, process-based modeling","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:db31899c-ca91-47a1-879e-4f4fa1d004e8","http://resolver.tudelft.nl/uuid:db31899c-ca91-47a1-879e-4f4fa1d004e8","A process-based approach to sediment transport in the Yangtze estuary","Chu, A.; Wang, Z.B.; De Vriend, H.J.; Stive, M.J.F.","","2010","A process-based model for the Yangtze Estuary is constructed to study the sediment transport in the estuary. The proposed model covers the entire tidal region of the estuary, the Hangzhou Bay and a large part of the adjacent sea. The dominant processes, fluvial and tidal, are included in the model. The calibration of the model against extensive flow, water level, salinity and suspended sediment data shows a good representation of observed phenomena. With the present calibrated and validated model, the residual flow field and the residual sediment transport field are obtained. The residual sediment transport pattern gives insight into the morphological behaviour of the mouth bars.","Yangtze Estuary; mouth bar; morphology; sediment transport; process-based model","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:81051152-6d19-4504-b2ea-280e5c727be3","http://resolver.tudelft.nl/uuid:81051152-6d19-4504-b2ea-280e5c727be3","Warning citizens; influencing self-reliance in emergencies","Sillem, S.","Ale, B.J.M. (promotor)","2010","An important part of the response to an emergency is making sure that people are able to take themselves and others to a place of safety. To make people aware that there is an emergency, there are three steps that have to be taken: there has to be a warning that something is going on, people have to perceive and process that warning, and finally, people have to perform the self-reliant behaviour that will get them to a safe place. Self-reliance can be increased when people are motivated to comply with instructions that are given in an emergency. This thesis is about finding out what factors influence self-reliance in an emergency and how these influencing factors can be investigated so that the total effectiveness of a warning system can be determined. The research question is: How can the way in which a new or existing warning system effectively influences citizens’ self-reliance in an emergency be investigated? A model was constructed which shows the steps of warning information processing in which self-reliance can be influenced. This model is called the contextual human information processing model, as it shows the influences on self-reliance in terms of the interactions between cognition, affective states and situational variables. The model looks at issues inside (HIP, Personal characteristics and behaviour) and outside (situational characteristics and warning) the human. The model produces a list of influencing factors that have to be investigated when determining the effectiveness of a warning system.","warning citizens; self-reliance; cell broadcast; siren; warning system; human information processing","en","doctoral thesis","","","","","","","","2010-06-28","Technology, Policy and Management","Safety Science","","","",""
"uuid:8bc685df-fcf4-4ed8-8ef3-a11e75f82c76","http://resolver.tudelft.nl/uuid:8bc685df-fcf4-4ed8-8ef3-a11e75f82c76","Pacman","Wilmer, D.W.H.; De Ridder, G.R.; Kol, A.A.; Harkes, D.C.","","2010","This bachelor report describes the creation of a fun and informative robot game for the Science Centre Delft. The project entails simulating the classic Pacman computer game using real robots. One robot, Pacman, is controlled by a player and has to fulfil certain assignments. Simultaneously, the other robots, the monsters, try to catch Pacman. A camera captures the movements of all the robots, and with the help of image processing the camera input is used to run a control program. The same image is also used to display information on monitors. This gives the player and other visitors insight into the inner workings of the game.","pacman; science centre; robots; signal processing","en","report","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Science","","","",""
"uuid:a8495e87-4134-4e1b-93cc-b0ba4163b2b0","http://resolver.tudelft.nl/uuid:a8495e87-4134-4e1b-93cc-b0ba4163b2b0","Secure signal processing: Privacy preserving cryptographic protocols for multimedia","Erkin, Z.","Lagendijk, R.L. (promotor)","2010","Recent advances in technology have created an environment in which people benefit from online services in their daily lives. Despite several advantages, online services also constitute serious privacy risks for their users, as the main inputs to the algorithms are privacy-sensitive data such as demographic information, shopping patterns and medical records. While traditional security mechanisms can eliminate a number of attacks from outside, these mechanisms cannot protect the privacy of the users, as the service provider itself constitutes the biggest potential risk. In this thesis, we focus on principled solutions to protect the privacy of users in multimedia applications. For this purpose we propose to keep the privacy-sensitive data safe by means of encryption during processing. This approach eliminates the risk of privacy abuse, as the sensitive data is available only to its owner and to no other party. However, once encrypted, the structure in the data is destroyed as a consequence of the encryption procedure, and thus we need appropriate tools to process encrypted data. Therefore, we focus on a number of cryptographic tools such as homomorphic encryption schemes and multiparty computation (MPC) techniques to realize privacy-preserving multimedia applications. The proposed principled solutions consider the signal processing aspect of the multimedia applications, which is, to the best of our knowledge, a new idea. In particular, we focus on a number of prototypical applications, namely face detection, user clustering in a social network, recommendation generation and anonymous fingerprinting. 
Based on these selected applications, we addressed the major challenges for secure signal processing: data representation, data expansion, realizing linear and non-linear operations, and the efficiency of the proposed protocols in terms of communication and computational costs. We propose to scale and round the signal values prior to encryption, as these operations are highly inefficient to realize in the encrypted domain. Moreover, we reserve sufficient space in terms of bit length for each signal sample to accommodate the possible expansion in bit size in the subsequent processing steps. Reserving more bits for each signal sample, however, compounds the data expansion problem: as the cipher text space is much larger than the size of the original -- and even scaled -- signal samples, data expansion after encryption increases data transmission and storage costs significantly. In order to minimize this cost we propose to pack a number of signal samples into one encryption and process them while they are in packed form. This approach requires cryptographic protocols particularly designed for the packed data, but in the end saves considerable resources in terms of bandwidth, storage capacity and even computational power. Homomorphism plays a crucial role in our proposed solutions. With the help of homomorphic encryption, we are able to implement linear operations such as correlation and projection without interaction. However, linear operations are only a part of signal processing. For non-linear operations like distance computation, thresholding and comparison, we exploit MPC techniques. These techniques are often interactive and computationally expensive compared to the original systems in the plain domain. However, by using data packing and designing the protocols with care, the communication and computational costs can be reduced significantly. In this thesis, we have shown that preserving privacy in multimedia signal processing is feasible. 
We determined the major challenges of secure signal processing and successfully combined a set of cryptographic tools with signal processing to realize the applications in the encrypted domain. The proposed solutions demonstrate that the privacy concerns in multimedia signal processing applications can be addressed using cryptographic tools. Moreover, protocols that are designed to realize certain operations in the encrypted domain can be reused in other applications and settings with a number of modifications.","Privacy; Secure multi-party computation; homomorphic encryption; secure signal processing","en","doctoral thesis","TU Delft Mediamatica","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:85a89c91-3472-4a1f-a7b7-092f707b3256","http://resolver.tudelft.nl/uuid:85a89c91-3472-4a1f-a7b7-092f707b3256","Automatic sign language recognition inspired by human sign perception","Ten Holt, G.A.","Reinders, M.J.T. (promotor); De Ridder, H. (promotor)","2010","Automatic sign language recognition is a relatively new field of research (since ca. 1990). Its objectives are to automatically analyze sign language utterances. There are several issues within the research area that merit investigation: how to capture the utterances (cameras, magnetic sensors, instrumented gloves), how to extract interesting information from the captured data, and how to classify signs or sentences automatically using the extracted information. These issues are of an immediate and basic nature, and must be solved before any automatic recognition of sign language can be achieved. But other issues, pertaining to the nature of sign language and human recognition, are no less interesting: which elements of a sign are important for the meaning of an utterance? How do consecutive signs influence one another? Why are certain types of variation unimportant while others change the meaning of the sign? Automatic sign language recognition has, until recently, mostly focused on the first set of issues. In this thesis, we attempt to integrate knowledge about sign languages and human sign recognition into the automatic sign recognition process. Research on the (psycho)linguistics of sign languages is itself quite young (since ca. 1960), and many questions as yet unanswered. For this reason, we conduct our own studies of human sign language recognition. The knowledge gained from these experiments is applied in an existing automatic sign language recognition system. The thesis is divided into two parts: the first part describes the experiments conducted with human signers, the second part describes experiments investigating the possibilities of integrating such knowledge in the automatic recognizer. 
This recognizer is meant to be used in an interactive environment for young children to practice sign language vocabulary. For this reason, it is vision-based (which is unobtrusive), and only handles isolated signs. The experiments in part I of the thesis investigate the information content of various sign elements: fragments of a sign in time (chapter 2), and the sign aspects handshape and hand orientation (chapter 3). In time, the central phase of a sign is the most informative one, on its own as informative as the entire sign. Recognition based on other phases is also possible to a certain extent, and the transition from the preparation phase to the central phase appears to be a salient moment. As for the aspects, the aspect handshape proves more useful for recognition than hand orientation. Chapter 4 gives an overview of the human recognition research and discusses possibilities for application. In part II, the possibilities of utilizing the results of part I in the recognition system are investigated. Chapter 5 describes the addition of the handshape feature to the system (which chapter 3 showed to be the most interesting feature to add). Adding handshape gives a small improvement in the recognition performance. In chapter 6, the salience of the sign fragments used in chapter 2 for the automatic recognizer is investigated. The central phase proves to be the most informative one, as it was for human signers. Chapter 7 describes experiments in which a small set of frames is used to represent a sign. The results show a deterioration in recognition performance. Strict demands on the correctness of the remaining frames are probably partly responsible for the performance decrease. In conclusion, we can say that applying human knowledge in automatic sign language recognition is a complex task. Conclusions about human sign recognition do not necessarily hold for the automatic recognizer as well. 
The most important obstacles for utilizing information successfully seem to be: 1) data acquisition: computer vision is not as accomplished as human observers in capturing the complex, dynamic hand and face motions that form sign language. This means that information that is present in a sign movement for a human being may not be (correctly) observed by an automatic vision analysis system. Thus, the data that humans work with is not necessarily identical to the data the recognizer works with, and this may cause techniques that are successful for human signers to fail in the automatic system. And 2) differences in basic system architecture. Research into human sign recognition is still ongoing; there is no clear model of human sign recognition yet. This makes it more difficult to translate observations from human sign recognition to the automatic recognizer: human signers may use techniques that are not compatible with the current architecture of the recognizer. For example: human signers may process aspects independently. If the recognition system processes all data as a single stream, then such a technique cannot be implemented. A more thorough understanding of human sign recognition, more sophisticated computer vision techniques, and a close co-operation between the fields of automatic sign language recognition and human sign perception seem the best way to overcome these obstacles.","sign language; automatic recognition; sign perception; language processing","en","doctoral thesis","G.A. ten Holt","","","","","","","2010-06-03","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:9270359b-6d4f-4193-a08e-39c1ea878fb1","http://resolver.tudelft.nl/uuid:9270359b-6d4f-4193-a08e-39c1ea878fb1","‘Change for the Better?’—making sense of housing association mergers in the Netherlands and England","Van Bortel, G.; Mullins, D.; Gruis, V.","","2010","Mergers among housing associations have become a frequent phenomenon in both the Netherlands and England. The general literature on mergers highlights the need for research to consider the wider political and business environment, managerial motives and strategic choices, to adopt a process perspective and to evaluate outcomes in relation to competing definitions of goals and success criteria. This article applies these perspectives to consider drivers for and experience of housing association mergers in the Netherlands and England, competing motivations such as efficiency savings in relation to borrowing and procurement costs, improved professionalism and organisational capacity and external influence. We discuss the pace and motivations of mergers, the expected positive and negative effects, and actual outcomes. We focus on the impact of mergers on stakeholder satisfaction, housing production and operational costs. Based on our findings we discuss the implications for policies and practice in both countries. Our main conclusion is that the relationship between the size of housing associations and their performance is not straightforward. This is partly because large and small associations are generally trying to do different things in different ways and have contrasting strengths and weaknesses; thus judgements about whether mergers and concentration of ownership in third sector housing is a change for the better are dependent upon considerations of underlying purposes and success criteria.","Housing associations; Mergers; Motives; Process; Outcomes","en","journal article","Springer","","","","","","","","OTB Research Institute for the Built Environment","","","","",""
"uuid:fbf8e70f-5c32-4de9-94ac-ed3c0d69431a","http://resolver.tudelft.nl/uuid:fbf8e70f-5c32-4de9-94ac-ed3c0d69431a","Development and prototype application of an oil spill risk analysis in a coastal zone","Tsimopoulou, V.","","2010","This paper introduces the development of a methodology for performance of oil spill risk analysis in coastal zones through a prototype application. The main objective of the research effort is to develop the basis for a tool that can assess risks due to the occurrence of an oil spill event aiming at assisting to the risk response process. The methodology concerns the processes of probability and consequence assessment. The two processes are accomplished qualitatively with a risk prioritization based on Analytic Hierarchy Process. Being a decision-making technique, Analytic Hierarchy Process can only be used after some appropriate modifications, which transform it into a tool for prioritizing risks with respect to their probability and consequence in different oil spill scenarios. This is an approach that attempts to rationalise the risk analysis stages and to indicate the uncertainties imposed to the problem, hence creating a basis for optimization of the risk analysis results.","risk analysis; oil spill; analytic hierarchy process","en","conference paper","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:4055c4ba-5b9f-462b-9a93-0431da2ffe0b","http://resolver.tudelft.nl/uuid:4055c4ba-5b9f-462b-9a93-0431da2ffe0b","On the development of Agent-Based Models for infrastructure evolution","Nikolic, I.; Dijkema, G.P.J.","","2010","Infrastructure systems for energy, water, transport, information etc. are large scale socio-technical systems that are critical for achieving a sustainable world. They were not created at the current global scale at once, but have slowly evolved from simple local systems, through many social and technical decisions. If we are to understand them and manage them sustainably, we need to capture their full diversity and adaptivity in models that respect Ashby's law of requisite variety. Models of evolving complex systems must themselves be evolving complex systems that can not be created from scratch but must be grown from simple to complex. This paper presents a socio-technical evolutionary modeling process for creating evolving, complex agent based models for understanding the evolution of large scale socio-technical systems such as infrastructures. It involves the continuous co-evolution and improvement of a social process for model specification, the technical design of a modular simulation engine, the encoding of formalized knowledge and collection of relevant facts. In the paper we introduce the process design, the requirements for guiding the evolution of the modeling process and illustrate the process for Agent Based Model development by showing a series of ever more complex models.","agent based modeling; evolution; evolutionary process design; infrastructure evolution","en","journal article","Inderscience","","","","","","","2010-10-01","Technology, Policy and Management","Section Energy and Industry","","","",""
"uuid:07dbeb8f-8408-4b80-a3dc-a09b9f0b4ff8","http://resolver.tudelft.nl/uuid:07dbeb8f-8408-4b80-a3dc-a09b9f0b4ff8","Formal analysis of design process dynamics","Bosse, T.; Jonker, C.M.; Treur, J.","","2010","This paper presents a formal analysis of design process dynamics. Such a formal analysis is a prerequisite to come to a formal theory of design and for the development of automated support for the dynamics of design processes. The analysis was geared toward the identification of dynamic design properties at different levels of aggregation. This approach is specifically suitable for component-based design processes. A complicating factor for supporting the design process is that not only the generic properties of design must be specified, but also the language chosen should be rich enough to allow specification of complex properties of the system under design. This requires a language rich enough to operate at these different levels. The Temporal Trace Language used in this paper is suitable for that. The paper shows that the analysis at the level of a design process as a whole and at subprocesses thereof is precise enough to allow for automatic simulation. Simulation allows the modeler to manipulate the specifications of the system under design to better understand the interlevel relationships in his design. The approach is illustrated by an example.","declarative modeling; design processes; dynamics; logical analysis; simulation","en","journal article","Cambridge University Press","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:4416dd48-9829-4af0-b678-8fcd8e87788a","http://resolver.tudelft.nl/uuid:4416dd48-9829-4af0-b678-8fcd8e87788a","Trapped-Victim Detection in Post-Disaster Scenarios using Ultra-Wideband Radar","Nezirovi?, A.N.","Ligthart, L.P. (promotor); Yarovoy, A.G. (promotor)","2010","Rescue dogs are commonly used during the urban search-and-rescue (USAR) operations for the initial indication on the presence of trapped victims after the collapse of man-made structures. However, dogs are not able to inform the rescue crews whether the trapped victims are alive or not and where exactly they are located. Other complementary tools, such as acoustic- and audio-visual equipment, are prone to inaccuracy, interference and inadequate range of operation. Ultra-wideband (UWB) radar is considered a promising tool for more exact assessment on the range of trapped victims. However, implementation of UWB radar for trapped-victim detection faces challenges such as low signal-tonoise ratio (SNR) conditions, interference from non-stationary clutter, residues due to amplitude instability originating in the equipment as well as narrowband radio interference. There are four commercially available UWB radar technologies and it is not clear which radar technology is the most suited one for the purpose of detecting trapped victims. There is very little available knowledge on the two target features that enable detection of a trapped human body using radar (respiratory- and cardiac motion). In need of further investigation is the choice of the optimal operational frequency band as well as assessment on the amount of attenuation of a few obstacles that represent, to various accuracy, real-life rubble. Chapter 2 introduces the reader with the basic principles of generation, sampling and pre-processing of UWB signals for the four available UWB radar technologies. 
It investigates the applicability of the two time-domain and the continuous-wave (CW) UWB radar technologies for the purpose of trapped-victim detection, both based on their inherent properties and by means of an experimental verification study evaluated under measurement conditions as similar as possible. The results of both the theoretical and experimental verification indicate that the CW UWB radar technology is preferable to the time-domain radar technologies due to its generally higher dynamic range, better use of the designated spectrum and higher transmit power, and because it enables the extraction of two target features, as opposed to only one (respiratory motion). The study is neither definitive nor final and should serve as a guideline for further studies and/or system design. Chapter 3 investigates in detail the time-domain and frequency-domain behaviour of the two available target features for various body positions. It shows that the respiratory motion responses are on average 13 dB stronger than the cardiac motion responses. The position in which the chest is turned toward the receive antenna produces the strongest respiratory motion responses due to a larger chest displacement and reflective area than the other positions. Detectability of respiratory motion responses as a function of aspect angle was investigated under line-of-sight conditions for three body positions, four bi-static angles and three antenna-pair polarisations using a single test person. It showed that there is no considerable difference in detectability among the investigated bi-static angles and co-polarised antenna pairs. However, it was concluded that cross-polarised antenna pairs should be avoided in real life as they produce significantly lower detectability values. The attenuation as a function of frequency of two types of obstacles (piles of sandstone blocks and a 60-cm concrete wall) was investigated in chapter 4. 
The results show that the attenuation for both materials is ca. 10-15 dB across the frequency range of interest. However, realistic rubble thicknesses and types of rubble can heavily increase the attenuation and thereby lower the probability of detection. Measurements involving a test person resting under an 80-cm concrete rubble pile and behind two concrete walls showed that a centre frequency well below 1 GHz gives rise to the highest SNR values. Bandwidths of ca. 400 MHz centred at frequencies below 1 GHz give rise to higher SNR values than the larger investigated bandwidths. On the other hand, smaller bandwidths result in poorer down-range resolution, which is necessary for resolving non-stationary clutter responses or multiple trapped victims. One of the fundamental tasks of this thesis is the development of a respiratory motion detection algorithm. Chapter 5 details a novel and computationally efficient algorithm which is able to improve SNR conditions and better suppress non-stationary clutter compared to an existing algorithm, assessed both experimentally and in a simulated environment. The algorithm further incorporates a threshold which aids the decision-making process of the operator. The performance of three common stationary-clutter suppression methods is investigated on a single measured data set containing respiratory motion and linear amplitude instability (linear trend). It was shown that the linear-trend removal method, which removes any potential linear trend and DC level in the slow-time dimension, is the preferred approach to stationary-clutter suppression. Narrowband interference (NBI) results in an increased noise floor and thereby worsens the probability of detection when using stroboscopic sampling (such as in impulse radar). Chapter 6 analyses the performance of four developed methods for NBI suppression implemented in stroboscopic samplers. 
The most suitable method for NBI suppression in stroboscopic samplers is to filter out the NBI in the analogue domain and, after sampling, to apply linear interpolation of the missing spectrum in order to avoid ringing of the backscattered waveforms from the victim. It shows an improvement factor of 12.9 dB in noise reduction and manages to preserve the signal waveform and energy very well. The thesis is completed by the conclusions and recommendations for future studies.","victim detection; radar signal processing; ultra-wideband radar; uwb; search-and-rescue","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","",""
"uuid:ae45e55b-1374-4d81-9df5-3104cf5e2905","http://resolver.tudelft.nl/uuid:ae45e55b-1374-4d81-9df5-3104cf5e2905","Detection and Segmentation of Colonic Polyps on Implicit Isosurfaces by Second Principal Curvature Flow","Van Wijk, C.; Van Ravesteijn, V.F.; Vos, F.M.; Van Vliet, L.J.","","2010","Today’s computer aided detection systems for computed tomography colonography (CTC) enable automated detection and segmentation of colorectal polyps.We present a paradigm shift by proposing a method that measures the amount of protrudedness of a candidate object in a scale adaptive fashion. One of the main results is that the performance of the candidate detection depends only on one parameter, the amount of protrusion. Additionally the method yields correct polyp segmentation without the need of an additional segmentation step. The supervised pattern recognition involves a clear distinction between size related features and features related to shape or intensity. A Mahalanobis transformation of the latter facilitates ranking of the objects using a logistic classifier. We evaluate two implementations of the method on 84 patients with a total of 57 polyps larger than or equal to 6 mm.We obtained a performance of 95% sensitivity at four false positives per scan for polyps larger than or equal to 6 mm.","biomedical image processing; image analysis; partial differential equation (PDE); polyp detection; surface evolution","en","journal article","IEEE","","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:cdd86bc4-f102-4956-a22e-8f3782d4c349","http://resolver.tudelft.nl/uuid:cdd86bc4-f102-4956-a22e-8f3782d4c349","Fish-Eye Observing with Phased Array Radio Telescopes","Wijnholds, S.J.","Van der Veen, A.J. (promotor)","2010","The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field-of-view, that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototype show that this model based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.","array processing; phased arrays; calibration; imaging; radio telescopes","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","",""
"uuid:011bfe91-7ac8-4ef5-8e44-78c5d2bd63d5","http://resolver.tudelft.nl/uuid:011bfe91-7ac8-4ef5-8e44-78c5d2bd63d5","Automated Detection of Polyps for CT Colonography","Van Wijk, C.","Van Vliet, L.J. (promotor)","2010","In this thesis several image processing techniques are proposed for the detection of colorectal polyps in images obtained by CT colonography.","CT Colonography; Automated Detection; Medical Imaging; Medical Image Processing","en","doctoral thesis","","","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:a7884a2a-4155-45ae-bb87-cf4d9ef5b6c0","http://resolver.tudelft.nl/uuid:a7884a2a-4155-45ae-bb87-cf4d9ef5b6c0","Adaptive image interrogation for PIV: Application to compressible flows and interfaces","Theunissen, R.","Scarano, F. (promotor)","2010","As an experimental tool, Particle Image Velocimetry has quickly superseded traditional point-wise measurements. The inherent image processing has become standardized though the performances are strongly dependent on user experience. Moreover, the arduously selected image interrogation parameters are applied uniformly throughout the image snapshots and image sequence but seldom comply with the observed fluid’s convective motion, spatial distribution in length scales or signal distribution. Instead, a degree of adaptation in the image analyses is required to estimate the velocity field underlying the image recordings as accurate as possible and preferably within an automated fashion. In this work, the aim has been a global solution which through adaptivity of the interrogation parameters (window size, eccentricity, orientation, location and overlap) remains adequate in the majority of encountered problems. This dissertation proposes to go in line of a recursive approach autonomously adapting to both signal and flow conditions. Correlation window location, number and size are regulated taking into account seeding quantity and flow fluctuation magnitude. Signal quantization is based on individual particle image segmentation while spatial variance in velocity served as a heuristic for flow adaptation. The new interrogation method surpasses the compromise between spatial resolution and robustness and places more and smaller windows where the flow requires it and seeding allows it. Vice versa, less of these unnecessary small windows are placed in regions where the flow does not require it (i.e. absence of gradients or fluctuations in velocity). 
A variant of the spatially adaptive interrogation method is proposed that refines window size, shape, orientation and spatial distribution based on the ensemble-averaged velocity field and image properties. The use of ensemble-averaged properties enables the reliable application of non-isotropic resolution, in contrast to the instantaneous adaptive approach, where the latter is impracticable. This approach additionally allows the number of interrogation windows to be reduced without overly compromising the measurement spatial resolution where needed. To cope with typical problems of PIV near interfaces, an innovative interface treatment has been proposed incorporating wall adaptivity in an automated manner by gradually increasing the sampling rate in the vicinity of the wall, rotating the correlation windows parallel to the interface and reducing wall-normal window sizes. The enhanced performance of the adaptive interrogation approach has been extensively assessed and demonstrated on a large basis of experimental flow image recordings.","PIV; image processing; adaptive interrogation; spatial resolution; aircraft wake vortex; cylinder; shock-wave boundary layer interaction; interface treatment; Fast Fourier Transforms; correlations; vector relocation; robustness; compression ramp; boundary layer; statistical adaptivity; non-isotropic correlation; window overlap ratio; transonic airfoil wake; data analysis; backward facing step; hypersonic sphere; over-expanded supersonic jet; statistical error; integral time scale; confidence level; dependent circular block bootstrap","en","doctoral thesis","Von Karman Institute for Fluid Dynamics","","","","","","","","Aerospace Engineering","Aerodynamics","","","",""
"uuid:8dd0f37e-bbcc-472c-8326-b829a1517e13","http://resolver.tudelft.nl/uuid:8dd0f37e-bbcc-472c-8326-b829a1517e13","Formation and evolution of nearshore sandbars","Smit, M.W.J.","Stive, M.J.F. (promotor); Reniers, A.J.H.M. (promotor)","2010","The aim of this study is to understand whether hydrodynamic processes or geometrical characteristics play a dominant role in the response of the nearshore sandbar system to hydrodynamic conditions. To that end a depth-averaged (2DH) process-based model has been used to compute the morphological evolution of nearshore sandbars. The morphological evolution was computed for an initially alongshore uniform beach profile with two bars with an alongshore length of 7 km, forced with constant hydrodynamic conditions over a period of two weeks. The computations aimed to investigate the evolution of the system. It was found that an identical initial cross-shore profile responds distinctly different to different constant hydrodynamic conditions, showing the role of the hydrodynamic conditions. The length scales of the bars (corresponding to rip channel distances) increased with increasing alongshore velocities and increasing depths of the bar crests. The length scales ranged from 300-700 m for the inner and 600-2000 m for the outer bar. The response time of the system was in the order of days and depends linearly on the local wave height, the alongshore current, the steepness of the bar and inversely on the active volume of the bar. Bars with a smaller volume were found to respond quicker. To speed up the morphological computations, the initial alongshore uniform bathymetries were perturbed with a random seed in the order of cm. Different seedings resulted in different locations of the evolving features, while maintaining the length scales corresponding to the forcing condition. The role of the antecedent morphology was further investigated with computations with an increasing level of initial morphological variability. 
A high level of variability is formed, for example, by deep rip channels. With deeply imprinted bathymetrical patterns, the resulting hydrodynamical patterns prohibited the evolution of new patterns. This prohibited the adaptation toward length scales that would match the concurrent forcing conditions if the initial bathymetry had been alongshore uniform. Only if the level of variability was small (smaller than O(0.5 m)) did the patterns adjust toward the expected length scales. This was found for evolutions with both increasing and decreasing energy levels. This explains why observed nearshore bar patterns rarely match the concurrent conditions. The antecedent level of variability is often high, which inhibits complete adaptation. Further, the forcing conditions rarely persist for periods of time that are long enough for a system to evolve towards the corresponding length scales, even if the initial variability had been minimal. A hindcast was performed of an observed morphological evolution at Palm Beach, New South Wales, Australia, during a ten-day period including a storm event. Palm Beach is a pocket beach of about 2 km length. During the event, the wave energy increased from moderate to storm levels, subsequently decreasing again to moderate conditions. The observed morphological variability changed from a single barred beach with rip channels toward a reset morphology (no alongshore variability) during the storm, with subsequently newly evolving rip channels during the quieter post-storm conditions. The initial bathymetries used for the model computations were inferred from video observations of the dissipation patterns. The effect of wave groups, wave asymmetry, long-wave-induced sediment stirring, the amount of turbulence and the rate of morphological change was tested in creating and hindcasting the observed patterns. It was found that these processes affect the magnitude and pace of morphological evolution. 
With optimal settings, the model including all mentioned processes forecasted a morphological evolution with decreasing variability during the storm event, similar to the observations. However, the observed amount of increase in bathymetrical variability after the storm event could not be matched in magnitude by the model. In general, the best matches with observations were obtained for computations with a duration of up to three days. Within this period the different process settings only clearly changed the morphological evolution when the storm event was included within this period. Excluding wave groups resulted in the evolution of slightly shorter length scales. During the storm event an offshore bar formed, and subsequent evolution was small and occurred both near the shore (wave groups have a diffusive effect in shallow water) and in deeper water. Computations starting after the storm event showed very little difference in morphological evolution whether wave groups were included or not. Excluding wave asymmetry resulted in shoreward migration of the shoreline and very little morphological evolution after all initial features had been erased. Long-wave-induced sediment stirring has a large diffusive effect on the evolving morphological features. Excluding this stirring resulted in the evolution of extreme shore-attached features. When the turbulent diffusion in the model was decreased, similar types of features evolved, though at slightly different rates and moments. Increasing the rate of morphological change resulted in the evolution of increased, mainly shore-attached, morphological variability. It was found that obtaining the correct pace and magnitude of morphological evolution is crucial for the level of success. If an event would encompass only an increasing or decreasing level of energy, it could be modelled. 
However, maintaining the correct pace and magnitude of evolution throughout a storm event has not been achieved with the currently tested model formulations and settings. This indicates that the model formulations need improvement. It is suggested that improving the description of the diffusion, including the turbulence description, can improve the model's capabilities. This requires not only improved model formulations but also increased knowledge of the turbulence processes in the nearshore zone. The location of evolving features was found to be highly sensitive to the location and depth of imprint of features in the initial bathymetry. As this is rarely available to the required degree of accuracy, it is not expected that the exact location of features will be predicted correctly. However, the length scales and level of variability could be hindcast if optimal settings are found, process descriptions are improved (e.g. the diffusion) and the model is morphologically calibrated for evolutions with both increasing and decreasing wave energy. In conclusion, the morphological evolution of nearshore sandbar patterns is found to be influenced by the initial morphology in two ways. First, if the initial variability is low, the local hydrodynamic forcings (determined by the offshore conditions and the local geometry) and their duration will determine the length scale. The location of initially small perturbations (on the order of centimetres) influences the location of rip channels. Second, if the initial morphology has a high level of variability, the bathymetry will remain the same because the resulting hydrodynamic circulations are reinforced by the incoming waves. Only an event with extreme energy may cause changes in the morphology in this case. 
The hydrodynamics appear to play a rather small role in changing the patterns of a bathymetry when there is a significant level of variability: they reinforce existing patterns and are only capable of drastically changing them when the energy level is extremely high. They do affect the length scales and the response time in the case of small initial variability. Using a morphological model requires accurate calibration of both the hydrodynamics and the morphodynamics. Different nearshore processes have different effects with various magnitudes at different locations in the nearshore zone. Obtaining the correct balance between processes that amplify or damp existing patterns throughout an event with varying energy levels is currently a challenge. However, hindcasting either an up-state or a down-state morphological evolution is possible.","nearshore sandbars; morphological model; rip channel; pattern; morphological process","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:759b6dda-0d5e-43b7-87f6-23bda131a7f2","http://resolver.tudelft.nl/uuid:759b6dda-0d5e-43b7-87f6-23bda131a7f2","A Pure Object-Oriented Embedding of Attribute Grammars","Sloane, A.M.; Kats, L.C.L.; Visser, E.","","2009","This paper is a pre-print of: Anthony M. Sloane, Lennart C. L. Kats, Eelco Visser. A Pure Object-Oriented Embedding of Attribute Grammars. In T. Ekman and J. Vinju, editors, Proceedings of the Ninth Workshop on Language Descriptions, Tools, and Applications (LDTA’09), Electronic Notes in Theoretical Computer Science. York, United Kingdom, March 2009. Attribute grammars are a powerful specification paradigm for many language processing tasks, particularly semantic analysis of programming languages. Recent attribute grammar systems use dynamic scheduling algorithms to evaluate attributes by need. In this paper, we show how to remove the need for a generator, by embedding a dynamic approach in a modern, object-oriented programming language to implement a small, lightweight attribute grammar library. The Kiama attribution library has similar features to current generators, including cached, uncached, circular, higher-order and parameterised attributes, and implements new techniques for dynamic extension and variation of attribute equations. We use the Scala programming language because of its combination of object-oriented and functional features, support for domain-specific notations and emphasis on scalability. Unlike generators with specialised notation, Kiama attribute grammars use standard Scala notations such as pattern-matching functions for equations and mixins for composition. A performance analysis shows that our approach is practical for realistic language processing.","language processing; compilers; domain-specific languages","en","report","Delft University of Technology, Software Engineering Research Group","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Computer Technology","","","",""
"uuid:56a12527-d8a1-4e46-bbc9-4d1f0e9d2c54","http://resolver.tudelft.nl/uuid:56a12527-d8a1-4e46-bbc9-4d1f0e9d2c54","Correlated fractal percolation and the Palis conjecture","Dekking, M.; Don, H.","","2009","Let F1 and F2 be independent copies of one-dimensional correlated fractal percolation, with almost sure Hausdorff dimensions dimH(F1) and dimH(F2). Consider the following question: does dimH(F1) + dimH(F2) > 1 imply that their algebraic difference F1 − F2 will contain an interval? The well known Palis conjecture states that ‘generically’ this should be true. Recent work by Kuijvenhoven and the first author (Dekking and Kuijvenhoven in J. Eur. Math. Soc., to appear) on random Cantor sets cannot answer this question as their condition on the joint survival distributions of the generating process is not satisfied by correlated fractal percolation. We develop a new condition which permits us to solve the problem, and we prove that the condition of Dekking and Kuijvenhoven (J. Eur. Math. Soc., to appear) implies our condition. Independently of this we give a solution to the critical case, yielding that a strong version of the Palis conjecture holds for fractal percolation and correlated fractal percolation: the algebraic difference contains an interval almost surely if and only if the sum of the Hausdorff dimensions of the random Cantor sets exceeds one.","Palis conjecture; algebraic difference; Cantor sets; correlated fractal; percolation; branching processes; criticality","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:ee432065-0bfe-467a-a100-8d8ce96659d7","http://resolver.tudelft.nl/uuid:ee432065-0bfe-467a-a100-8d8ce96659d7","Single Grain TFTs for High Speed Flexible Electronics","Baiano, A.","Beenakker, C.I.M. (promotor)","2009","SG-TFTs fabricated by the µ-Czochralski process have already reached a performance as high as that of SOI MOSFET devices. However, one of the most important and challenging goals is extending SG-TFT technology to reach a higher level of performance than that achieved with SOI technology. This thesis considers two different aspects of this question. Firstly, given the proven potential of the µ-Czochralski process to provide high-quality crystalline silicon, it is of interest to investigate whether the µ-Czochralski process could also be used to produce high-mobility semiconductor materials such as germanium (Ge) sputtered at low temperature as a medium for future thin-film transistor applications, since Ge is considered to be a potential replacement for silicon (Si) because of its much higher carrier mobility. Secondly, it is also worthwhile investigating whether the field-effect mobilities of n- and p-channel single-grain Si TFTs could be enhanced compared with the most advanced strained-Si on SiGe MOSFET technology by applying strain with excimer laser crystallization, despite the low process temperature used. The study of degradation phenomena in SG-TFTs under bias stress is also of fundamental importance for the reliability analysis of such devices. A method for degradation analysis of SG-TFTs under bias stress for 2D modeling by a TCAD simulator has therefore been developed as part of the present study. Such modeling aims to improve our understanding of high voltage applications. A prototype E-Paper with active-matrix quick-response liquid powder display has been designed and developed with the aid of SG-TFT technology on this basis. The main issue in the development of such E-Paper is the requirement for a 70 V supply voltage. 
The necessary SG-TFT produced by the µ-Czochralski process must therefore be designed to operate at such a high voltage, and its fabrication process must be compatible with the µ-Czochralski process used to make standard SG-TFTs for the development of a fully integrated E-Paper with display and driver circuits. No application of SG-TFTs fabricated by the µ-Czochralski process would be possible without an accurate compact SPICE model of the intended device. Many SPICE models are commercially available nowadays for both MOSFET and Poly-Si TFT technologies. However, none of those is suitable for SG-TFTs. An accurate SPICE model of SG-TFT circuits designed for digital, analog and RF applications has been developed as part of the present study. In particular, a unified SPICE model has been obtained that is applicable both to SG-TFTs fabricated by crystallization at low laser energy (which have poly-Si-like performance) and to TFTs made by crystallization at high laser energy (which have SOI-like performance).","single-grain silicon thin-film transistors (TFTs); µ-Czochralski process; DC-RF SPICE model; electronic paper; high-voltage single-grain silicon TFTs; location-controlled germanium grains; single-grain germanium TFTs; tensile strain silicon; TFT reliability","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","",""
"uuid:829e0d8a-fa6d-4b09-8ba5-793da681e725","http://resolver.tudelft.nl/uuid:829e0d8a-fa6d-4b09-8ba5-793da681e725","Riverbed sediment classification using multi-beam echo-sounder backscatter data","Amiri-Simkooei, A.; Snellen, M.; Simons, D.G.","","2009","A method has recently been developed that employs multi-beam echo-sounder backscatter data to both obtain the number of sediment classes and discriminate between them by applying the Bayes decision rule to multiple hypotheses [ Simons and Snellen, Appl. Acoust. 70, 1258–1268 (2009) ]. In deep water, the number of scatter pixels within the beam footprint is large enough to ensure Gaussian distributions for the backscatter strengths and to increase the discriminative power between acoustic classes. In very shallow water (<10 m), however, this number is too small. This paper presents an extension of this high-frequency methodology for these environments, together with a demonstration of its performance using backscatter data from the river Waal, The Netherlands. The objective of this work is threefold. (i) Increasing the discriminating power of the classification method: high-resolution bathymetry data allow precise bottom slope corrections for obtaining the true incident angle, and the high-resolution backscatter data reduce the statistical fluctuations via an averaging procedure. (ii) Performing a correlation analysis: the dependence of acoustic backscatter classification on sediment physical properties is verified by observing a significant correlation of 0.75 (and a disattenuated correlation of 0.90) between the classification results and sediment mean grain size. 
(iii) Enhancing the statistical description of the backscatter intensities: angular evolution of the K-distribution shape parameter indicates that the riverbed is a rough surface, in agreement with the results of the core analysis.","acoustic signal processing; backscatter; Bayes methods; correlation methods; fluctuations; Gaussian distribution; rivers; sediments; signal classification","en","journal article","Acoustical Society of America","","","","","","","","Aerospace Engineering","Remote Sensing","","","",""
"uuid:6d1689f3-7b1a-4355-81b0-af3d19fae469","http://resolver.tudelft.nl/uuid:6d1689f3-7b1a-4355-81b0-af3d19fae469","Carbon Dioxide Capture from Flue Gas: Development and Evaluation of Existing and Novel Process Concepts","Abu Zahra, M.R.M.","Jansens, P.J. (promotor)","2009","One of the main global challenges in the years to come is to reduce CO2 emissions in view of their apparent contribution to global warming. Carbon dioxide capture, transport, and storage (CCS) from fossil fuel fired power plants is drawing increased interest as an intermediate solution towards sustainable energy systems in the long term. However, CCS still faces some challenges: large-scale implementation requires a high energy input and leads to high cost. Innovation and optimization of the capture process are needed to reduce the energy requirement and to minimize the investment cost in order to make CCS viable for application in the near future. CO2 post-combustion capture based on the absorption/desorption process with monoethanolamine (MEA) solutions is considered the state-of-the-art technology. In this thesis, the MEA process has been defined as the reference case for the purposes of comparison and benchmarking. From the analysis of the MEA reference case, it can be concluded that this is an energy-intensive process due to the regeneration energy of the MEA solution (4 GJ/tonne CO2). For this conventional process, major energy savings can be realized by optimizing the lean solvent loading, the amine solvent concentration and the stripper operating pressure. A minimum thermal energy requirement of 3.0 GJ/tonne CO2 can be obtained using a 40 wt. % MEA solution and a stripper operating pressure of 210 kPa. Significant energy and cost savings can be achieved by increasing the MEA concentration in the absorption solution. It remains to be investigated, however, whether high MEA concentrations can be used, given possible corrosion and solvent degradation. 
Increasing the temperature (operating pressure) in the stripper will lead to a higher regeneration efficiency and will reduce the thermal energy requirement. Moreover, a high operating pressure will reduce the cost and the energy needed for CO2 compression. The economic baseline for CO2 post-combustion capture using MEA is defined using a 600 MWe coal-fired power plant as a reference case and assuming 2005 as the reference year, an 8% discount factor and a 25-year project life. The process modelling results are used to provide the required input to the economic modelling. The economic evaluation for the conventional MEA process has shown that it will lead to a cost of ~40 €/tonne CO2 avoided. Using the baseline techno-economic evaluation as a starting point, a parameter study for the conventional CO2 post-combustion capture process is performed. The main operating variables considered in this study were the MEA solvent concentration, the CO2 removal percentage, the solvent lean loading, and the stripper operating pressure. The economic results show a minimum CO2 avoided cost of 33 €/tonne CO2 with optimized process conditions: a lean solvent loading of 0.3 mol CO2/mol MEA, a 40 wt. % MEA solution and a stripper operating pressure of 210 kPa. This translates to a cost of electricity of 53 €/MWh, compared to 31 €/MWh for the power plant without capture. The difference in costs per tonne CO2 avoided is small for CO2 removal in the range between 80% and 95%. The overall performance of the CO2 post-combustion capture process is evaluated using pilot plant experimental results. Two different modelling approaches (equilibrium-stage and rate-based) are validated and compared using these large-scale pilot plant data. Equilibrium-stage and rate-based models are implemented using the commercial Aspen Plus simulation tool. 
The study indicates that there are no major differences between the two modelling approaches in predicting the overall capture process behaviour for this pilot plant case (e.g. regeneration energy requirement, CO2 removal % and solvent rich loading). Hence an equilibrium-stage model was preferred as the basis for overall process modelling and benchmarking of different capture solvents, in view of its lower complexity. The rate-based model, however, did yield more accurate predictions of the temperature profiles and mass transfer inside the columns. As a result, for a detailed process design or understanding of the mass and energy profiles in the absorber and stripper columns, the rate-based approach should be applied. The Hypogen concept (electricity generation with co-production of hydrogen) is considered one of the future energy options. This option will facilitate the use of a clean source of energy (hydrogen) for purposes like transportation and heating. This concept is based on the use of syngas for power production with CO2 post-combustion capture, incorporating the possibility of co-production of hydrogen (5-10% of the total syngas). In this concept, hydrogen is produced and purified by two different methods. The first method is based on increasing the hydrogen content using the water gas shift reaction, followed by the separation of hydrogen from CO2 using a high-pressure absorber. This absorber column is integrated with the ambient post-combustion capture process. The second method is based on the separation of hydrogen from syngas using polymeric membranes. In both options, the hydrogen will be further purified using a pressure swing adsorption system. Both options are feasible, with an overall CO2 capture cost comparable to the conventional post-combustion capture process. However, there are some limitations in the hydrogen purity using polymeric membranes. 
The advantage of the high-pressure absorber is more obvious if an advanced solvent, like the sterically hindered 2-amino-2-methyl-1-propanol (AMP), is used instead of a conventional solvent like MEA. Increasing the CO2 content in the flue gas is investigated by recycling the flue gas over the gas turbine. The flue gas recycle is beneficial for the overall capture process behaviour. The total flue gas flow rate is reduced with increasing flue gas recycle ratio. This reduction in the flue gas flow rate results in a smaller absorber column. The capital investment, the cost of electricity and the cost of CO2 avoided are reduced with increasing flue gas recycle ratio. The flue gas recycle has only a marginal effect on the solvent regeneration energy when using the conventional MEA solvent, due to the limitation in MEA solvent capacity. Moreover, the effect of the flue gas recycle on the energy requirement and the overall cost is more significant using a different solvent with higher loading capacity (e.g. AMP). As observed from the analysis of the conventional MEA process, the desorption energy requirement is a significant burden for large-scale applications. To overcome the high energy demand and to increase the operational flexibility, a new process concept is investigated. This process concept is based on dividing the CO2 capture process into a bulk removal step and a deep removal step using two different solvents/systems. This two-step concept is evaluated for two different cases. Both cases are based on the use of MEA in the first step. In the second step, either an AMP solution or coal/activated carbon is used for the removal of the remaining CO2. The results show that the removal of CO2 using coal or activated carbon is not advantageous due to the large quantity of coal/activated carbon needed. On the other hand, the use of two chemical solvents has shown potential for possible process improvement. 
The overall energy requirement for the two-solvent concept can be reduced by 16% compared to the MEA reference case. Due to the higher capital costs, the overall cost of carbon dioxide avoided in the two-step concept increases by 13%. Still, increased capture process flexibility can be an advantage of the two-step concept. This flexibility allows the application of different operating conditions and/or process systems in the different absorption-desorption units. One of the benefits can be the use of waste heat for regeneration, by operating one of the desorbers at a lower temperature. From the analysis of the post-combustion capture process in this thesis, it is evident that achieving a significant reduction of the capture process cost requires improving multiple process parameters. For future development of the CO2 post-combustion capture process, it would be beneficial to direct solvent development research towards solvent systems which have a lower reaction enthalpy and a higher capacity. A significant improvement can be obtained by the development of solvent systems where the solvent is regenerated at higher pressure. In addition, smart process improvement and integration are required to achieve a reasonable cost reduction. Flue gas recycle over the gas turbine can contribute by reducing the overall capital investment. Splitting the capture process and/or combining it with co-production of hydrogen can be an extra economic parameter in the overall process optimization. It can be expected that by improving the process design and the solvent, implementation of post-combustion capture on a larger scale will be possible in the near future.","CO2 Capture; Post combustion capture; Process development","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","",""
"uuid:64c0929b-78fa-47a8-912a-f833cc95ae04","http://resolver.tudelft.nl/uuid:64c0929b-78fa-47a8-912a-f833cc95ae04","Safety Assurance Process for FRMS: EJcase Implementation","Stewart, S.; Koornneef, F.; Akselsson, R.; Barton, P.","","2009","Chapter 6: Safety Assurance Process for FRMS - eJcase Implementation The European Commission HILAS project (Human Integration into the Lifecycle of Aviation Systems - a project supported by the European Commission’s 6th Framework between 2005 and 2009) was focused on using human factors knowledge and methodology to address key challenges for aviation (current and future), including a performance-based approach to safety and fatigue management in the aviation sector, mainly in-flight operations and maintenance. The project Deliverables have been presented as a series of draft book chapters on Safety Management Systems with emphasis on Fatigue Risk Management and organisational learning from operational experiences in aviation. The chapters also include conceptual frameworks underlying a thorough analysis of essential functions, contents and structures of a Safety Management System (SMS). This includes supporting functionalities such as investigation in a just culture context, decision making processes, and safety promotion. This set of eight (8) draft chapters links theory with field implementation and regulation in airlines. The draft chapters are: Ch. 1: Organisational Learning and Organisational Memory for SMS and FRMS Ch. 2: Resilience Safety Culture in Aviation Organisations Ch. 3: Operational Risk Management System for SMS and FRMS Ch. 4: Incident Investigation in SMS and FRMS Ch. 5: Developing a Safety Management System for Fatigue Related Risks in easyJet Ch. 6: Safety Assurance Process for FRMS – eJcase Implementation Ch. 7: Developing a Resilient Just Culture in SMS and FRMS – eJcase Implementation Ch. 
8: International Fatigue Risk Management Forum - Safety Promotion and Feedback in FRMS","Aviation; SMS; Fatigue Safety Assurance; Rostering; Regulation; Principles; Organisational Learning; Rostering Evaluation Group; Review Process; Implementation; Strategic FRMS","en","book chapter","","","","","","","","Technology, Policy and Management","Safety Science Group","","","",""
"uuid:c339a6a0-959d-4c01-9e1c-b2196e4ace6d","http://resolver.tudelft.nl/uuid:c339a6a0-959d-4c01-9e1c-b2196e4ace6d","The Quality of Lagged Products and Autoregressive Yule–Walker Models as Autocorrelation Estimates","Broersen, P.M.T.","","2009","The sample autocorrelation function is defined by the mean lagged products (LPs) of random observations. It is the inverse Fourier transform of the raw periodogram. Both contain the same information, and the quality of the full-length sample autocorrelation to represent random data is as poor as that of a raw periodogram. The autoregressive (AR) Yule-Walker method uses LP autocorrelation estimates to compute AR parameters as a parametric model for the autocorrelation. The order of the AR model can be taken as the full LP length, or it can be determined with an order selection criterion. However, the autocorrelation function can more accurately be estimated with a general parametric time-series method. This parametric estimate of the autocorrelation function always has better accuracy than the LP estimates. The LP autocorrelation function is as long as the observation window, but parametric estimates will eventually die out. They allow an objective answer to the question of how long the autocorrelation function really is.","autoregressive (AR) process; correlation; identification; order selection; spectral estimation; time-series model","en","journal article","IEEE","","","","","","","","Applied Sciences","Multi-Scale Physics","","","",""
"uuid:21c6ebf2-313a-4737-a8da-434571bc76c2","http://resolver.tudelft.nl/uuid:21c6ebf2-313a-4737-a8da-434571bc76c2","Pose estimation for mobile devices and augmented reality","Caarls, J.","Van Vliet, L.J. (promotor); Jonker, P.P. (promotor)","2009","In this thesis we introduce the reader to the field of Augmented Reality (AR) and describe aspects of an AR system. We show the current uses in treatment of phobias, games, sports and industry. We present the challenges for Optical See-Through Augmented Reality in which the real world is perceived normally by the user and is augmented with virtual objects by means of two displays and two half-translucent mirrors. Since the user does not perceive the world through camera images, as in Video See-Through Augmented Reality, the requirements for accurate alignment between the real and virtual worlds are more strict. Based on the design requirements for optical see-through augmented reality, a system architecture for the full AR system is proposed. A pose (position and orientation) estimation architecture is introduced, which separates an application that needs an estimate of a pose from the sensors that provide partial measurements for this pose. It is a modular architecture in which modules can publish “magazines” to which other modules can subscribe. A magazine is a data stream of which issues can be read concurrently by multiple subscribers. The read-out rate may be lower than the publishing frequency. Each issue of a magazine is a time-stamped data package from a stream, such as an image or measurement. The core of the work addresses the largest challenge in optical see-through AR: real-time pose estimation of the user’s eyes by fusing information from various sensors. Image processing techniques and sensor data fusion filters were developed to provide the most accurate estimation of the pose of a user’s head. 
The system is general enough to be used in other, less demanding applications that need an estimate of a pose, such as free-roaming automated vehicles in industrial settings. We explored image processing techniques for determining the pose of the camera from a single image of a marker. A marker is presented that minimizes the impact on the environment. Starting from well-known methods to detect edges and corners we developed our own corner detector that is accurate, precise and robust to noise. We presented a method to estimate the camera’s pose from four corners, and evaluated the accuracy in practical experiments. A Kalman filter is constructed and presented in detail that optimally combines the data from various sensors with different update rates, delays and accuracies. We also propose a pluggable Kalman filter set-up that enables sensors to be added and removed easily without changing the central filter that communicates with the application. This facilitates the separation between the sensor modules, the central filter and the application. A prototype AR system was built and evaluated. We present the practical aspects of integrating the sensors and pose estimation methods into a working augmented reality system. Using a SCARA robot to move our set-up, we determined practical accuracies for our system. We showed that one small marker is in general not enough for a fully immersive augmented reality experience. We propose some solutions to increase the accuracy of the system and finally we show how we made convincing Augmented Reality demonstrations in our ongoing cooperation with the AR-lab of the Royal Academy of Arts in The Hague.","Image Processing; Augmented Reality; Pose estimation","en","doctoral thesis","","","","","","","","","Applied Sciences","Imaging Science & Technology","","","",""
"uuid:84703eac-a050-4e85-bf32-68bbae218732","http://resolver.tudelft.nl/uuid:84703eac-a050-4e85-bf32-68bbae218732","Reflection images from ambient seismic noise","Draganov, D.S.; Campman, X.; Thorbecke, J.W.; Verdel, A.; Wapenaar, C.P.A.","","2009","One application of seismic interferometry is to retrieve the impulse response (Green's function) from crosscorrelation of ambient seismic noise. Various researchers show results for retrieving the surface-wave part of the Green's function. However, reflection retrieval has proven more challenging. We crosscorrelate ambient seismic noise, recorded along eight parallel lines in the Sirte basin east of Ajdabeya, Libya, to obtain shot gathers that contain reflections. We take advantage of geophone groups to suppress part of the undesired surface-wave noise and apply frequency-wavenumber filtering before crosscorrelation to suppress surface waves further. After comparing the retrieved results with data from an active seismic exploration survey along the same lines, we use the retrieved reflection data to obtain a migrated reflection image of the subsurface.","geophysical signal processing; interference suppression; seismic waves; seismology; signal denoising","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:b71c6656-d761-4fb4-b079-5c8f363383e9","http://resolver.tudelft.nl/uuid:b71c6656-d761-4fb4-b079-5c8f363383e9","Ray-based stochastic inversion of prestack seismic data for improved reservoir characterization","Van der Burg, D.; Verdel, A.; Wapenaar, C.P.A.","","2009","Trace inversion for reservoir parameters is affected by angle averaging of seismic data and wavelet distortion on the migration image. In an alternative approach to stochastic trace inversion, the data are inverted prestack before migration using 3D dynamic ray tracing. This choice makes it possible to interweave trace inversion with Kirchhoff migration. The new method, called ray-based stochastic inversion, is a generalization of current amplitude versus offset/amplitude versus angle (AVO/AVA) inversion techniques. The new method outperforms standard stochastic inversion techniques in cases of reservoir parameter estimation in a structurally complex subsurface with substantial lateral velocity variations and significant reflector dips. A simplification of the method inverts the normal-incidence response from reservoirs with approximately planar layering at the subsurface target locations selected for inversion. It operates along raypaths perpendicular to the reflectors, the direction that offers optimal resolution to discern layering in a reservoir. In a test on field data from the Gulf of Mexico, reservoir parameter estimates obtained with the simplified method, the estimates found by conventional stochastic inversion, and the actual values at a well drilled after the inversion are compared. 
Although the new method uses only 2% of the prestack data, the result indicates it improves accuracy on the dipping part of the reservoir, where conventional stochastic inversion suffers from wavelet stretch caused by migration.","geophysical techniques; hydrocarbon reservoirs; seismic waves; seismology; stochastic processes","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:17bd6c50-a2ef-4a05-a778-b04219244a11","http://resolver.tudelft.nl/uuid:17bd6c50-a2ef-4a05-a778-b04219244a11","Developing a Resilient Just Culture in SMS and FRMS: EasyJet Implementation","Stewart, S.; Akselsson, R.; Koornneef, F.","","2009","Chapter 7: Developing a Resilient Just Culture in SMS and FRMS – easyJet Implementation The European Commission HILAS project (Human Integration into the Lifecycle of Aviation Systems - a project supported by the European Commission’s 6th Framework between 2005-2009) was focused on using human factors knowledge and methodology to address key challenges for aviation (current and future) including a performance based approach for safety and fatigue management in the aviation sector, mainly inflight operations and maintenance. The project Deliverables have been presented as a series of draft book chapters on Safety Management Systems with emphasis on Fatigue risk Management and organisational learning from operational experiences in aviation. The chapters also include conceptual frameworks underlying a thorough analysis of essential functions, contents and structures of a Safety Management System (SMS). This includes supporting functionalities such as investigation in just culture context, decision making processes, and safety promotion. This set of eight (8) draft chapters links theory with field implementation and regulation in airlines. The draft chapters are: Ch. 1: Organisational Learning and Organisational Memory for SMS and FRMS Ch. 2: Resilience Safety Culture in Aviation Organisations Ch. 3: Operational Risk Management System for SMS and FRMS Ch. 4: Incident Investigation in SMS and FRMS Ch. 5: Developing a Safety Management System for Fatigue Related Risks in easyJet Ch. 6: Safety Assurance Process for FRMS – eJcase Implementation Ch. 7: Developing a Resilient Just Culture in SMS and FRMS – easyJet Implementation Ch. 
8: International Fatigue Risk Management Forum - Safety Promotion and Feedback in FRMS","Aviation; Safety Management System; Resilient Safety Culture; Just Culture Management Process; Investigation Process; Culpability; Role of Investigators; Human Factors Management; Implementation","en","book chapter","","","","","","","","","Technology, Policy and Management","Safety Science Group","","","",""
"uuid:8bed11e7-8e84-4e67-a5e8-d455ee9e061b","http://resolver.tudelft.nl/uuid:8bed11e7-8e84-4e67-a5e8-d455ee9e061b","Incident Investigation in SMS and FRMS","Stewart, S.; Koornneef, F.; Akselsson, R.; Kingston, J.; Stewart, D.","","2009","Chapter 4: Incident Investigation in SMS and FRMS The European Commission HILAS project (Human Integration into the Lifecycle of Aviation Systems - a project supported by the European Commission’s 6th Framework between 2005-2009) was focused on using human factors knowledge and methodology to address key challenges for aviation (current and future) including a performance based approach for safety and fatigue management in the aviation sector, mainly inflight operations and maintenance. The project Deliverables have been presented as a series of draft book chapters on Safety Management Systems with emphasis on Fatigue risk Management and organisational learning from operational experiences in aviation. The chapters also include conceptual frameworks underlying a thorough analysis of essential functions, contents and structures of a Safety Management System (SMS). This includes supporting functionalities such as investigation in just culture context, decision making processes, and safety promotion. This set of eight (8) draft chapters links theory with field implementation and regulation in airlines. The draft chapters are: Ch. 1: Organisational Learning and Organisational Memory for SMS and FRMS Ch. 2: Resilience Safety Culture in Aviation Organisations Ch. 3: Operational Risk Management System for SMS and FRMS Ch. 4: Incident Investigation in SMS and FRMS Ch. 5: Developing a Safety Management System for Fatigue Related Risks in easyJet Ch. 6: Safety Assurance Process for FRMS – easyJet case study Implementation Ch. 7: Developing a Resilient Just Culture in SMS and FRMS – easyJet case study Implementation Ch. 
8: International Fatigue Risk Management Forum - Safety Promotion and Feedback in FRMS","Aviation; Investigation Process; Risk Management System; Analytical Tools; Organisational Learning and Memory; Safety Management System","en","book chapter","","","","","","","","","Technology, Policy and Management","Safety Science Group","","","",""
"uuid:8dd73b3c-2987-4981-9a17-7f9fc6f44b99","http://resolver.tudelft.nl/uuid:8dd73b3c-2987-4981-9a17-7f9fc6f44b99","The study of bronze statuettes with the help of neutron-imaging techniques","Van Langh, R.; Lehmann, E.; Hartmann, S.; Kaestner, A.; Scholten, F.","","2009","Until recently fabrication techniques of Renaissance bronzes have been studied only with the naked eye, microscopically, videoscopically and with X-radiography. These techniques provide information on production techniques, yet much important detail remains unclear. As part of an interdisciplinary study of Renaissance bronzes undertaken by the Rijksmuseum Amsterdam, neutron-imaging techniques have been applied with the aim of obtaining a better understanding of bronze workmanship during the Renaissance period. Therefore, an explanation of the fabrication techniques is given to better understand the data collected by these neutron-imaging techniques. The data was used for tomography studies, which reveal hidden aspects that could not at all or scarcely be seen using X-radiography. For this specific study, the representative bronze ‘Hercules Pomarius’ of Willem van Tetrode (ca 1520–1588) has been examined, along with 20 other Renaissance bronzes from the Rijksmuseum collection.","Archaeometry; Neutron tomography; Radiography; Non-destructive testing; Fabrication process; Renaissance bronze","en","journal article","Springer","","","","","","","","Mechanical, Maritime and Materials Engineering","Materials Science and Engineering","","","",""
"uuid:58d3ba3f-5514-4e9c-b88b-adbe7812857f","http://resolver.tudelft.nl/uuid:58d3ba3f-5514-4e9c-b88b-adbe7812857f","Toward a general model of portfolio decision making","Kester, L.; Griffin, A.; Hultink, E.J.; Lauche, K.","","2009","We develop a general model of how new product development portfolio decisions are made based on four diverse case studies. Previous research has investigated portfolio decisions as individually discrete decisions. We find that portfolio decision-making has to be considered as an integrated system of domain-based processes that produce evidence-, opinion- and power-based informational inputs. The data further suggest that these processes are influenced by the level of trust, collective ambition, and leadership style. The ultimate objective of a firm is to achieve a portfolio mindset to focus effort on the right projects, and to be agile in their decision-making capabilities.","portfolio management; decision processes","en","conference paper","Academy of Management","","","","","","","","Industrial Design Engineering","Product Innovation Management","","","",""
"uuid:f9398a35-1506-4c73-9492-48284f71d85f","http://resolver.tudelft.nl/uuid:f9398a35-1506-4c73-9492-48284f71d85f","Analyzing opportunities for using interactive augmented prototyping in design practice","Verlinden, J.C.; Horvath, I.","","2009","The use of tangible objects is paramount in industrial design. Throughout the design process physical prototypes are used to enable exploration, simulation, communication, and specification of designs. Although much is known about prototyping skills and technologies, the reasons why and how such models are employed in design practice are poorly understood. Advanced techniques and design media such as virtual and augmented prototyping are being introduced without insight as to their benefits.We believe that an augmented prototyping system, that is, employing augmented reality technology to combine physical and digital representations, could positively influence the design process. However, we lack knowledge on why and howitmight facilitate design. This paper reports on case studies performed in different domains of industrial design. At each of three Dutch design offices, a project was followed with particular attention to physical prototyping and group activities. The projects encompassed information appliance design, automotive design, and interior design. Although the studies vary in many aspects (product domain, stakeholders, duration), the findings can be applied in conceptualizing advanced prototyping systems to support industrial design. Furthermore, the data reveal that the roles of a prototype in current practice are not necessarily utilitarian; for example, the prototype may serve as a conversation piece or as seducer. Based on so-called “hints,” bottlenecks and best practices concerning concept articulation are linked to usage scenarios for augmented tangible prototyping. The results point to modeling and communication scenarios. 
Detailed study of the cases indicates that communication activities, especially design reviews, would benefit most from interactive augmented prototyping.","augmented reality; case study; design process; prototyping; tangible user interfaces","en","journal article","Cambridge University Press","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:a8d872ac-6641-47e9-8c5f-653b552c01c5","http://resolver.tudelft.nl/uuid:a8d872ac-6641-47e9-8c5f-653b552c01c5","Communicating model insights using interactive learning environments","Slinger, J.H.; Yucel, G.; Pruyt, E.","","2009","Much attention is focused on the rational and advisory style of developing and applying System Dynamics models. Even group model building focuses primarily on the formulation and understanding of the model by the group members themselves. There is a dearth of attention for communication of the insights derived during the model building process to those peripherally or (un)involved in this process. In this study, the multi-actor context of model implementation is addressed explicitly. The feedback loop connecting model-derived insights and results back to the problem owners, the client and stakeholders, is explored. A number of principles for use in the communication of models are derived and the rôle of interactive learning environments as a tool in communicating model insights in such a multi-actor context is discussed.","modeling process; multi-actor context; communication principles; learning; multiple stakeholder environments; interactive user interfaces","en","conference paper","System Dynamics Society","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:14eeb991-c4fc-4959-b8d9-e15371864dc6","http://resolver.tudelft.nl/uuid:14eeb991-c4fc-4959-b8d9-e15371864dc6","Stochastic joint inversion of 2D seismic and seismoelectric signals in linear poroelastic materials: A numerical investigation","Jardani, A.; Revil, A.; Slob, E.C.; Söllner, W.","","2009","The interpretation of seismoelectrical signals is a difficult task because coseismic and seismoelectric converted signals are recorded simultaneously and the seismoelectric conversions are typically several orders of magnitude smaller than the coseismic electrical signals. The seismic and seismoelectric signals are modeled using a finite-element code with perfectly matched layer boundary conditions assuming a linear poroelastic body. We present a stochastic joint inversion of the seismic and seismoelectrical data based on the adaptive Metropolis algorithm, to obtain the posterior probability density functions of the material properties of each geologic unit. This includes the permeability, porosity, electrical conductivity, bulk modulus of the dry porous frame, bulk modulus of the fluid, bulk modulus of the solid phase, and shear modulus of the formations. A test of this approach is performed with a synthetic model comprising two horizontal layers and a reservoir partially saturated with oil, which is embedded in the second layer. The result of the joint inversion shows that we can invert the permeability of the reservoir and its mechanical properties.","elastic moduli; finite element analysis; geophysical prospecting; geophysical signal processing; hydrocarbon reservoirs; permeability; porosity; seismology; terrestrial electricity","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:9c83dd21-50f6-4f7d-8470-a812726aa7f2","http://resolver.tudelft.nl/uuid:9c83dd21-50f6-4f7d-8470-a812726aa7f2","report of the Schelde Pilot Study","Marchand, M.","TU Braunschweig","2009","The aim of this report is to document the output of the three workshops and the questionnaire. The results of the Flood risk analysis has been reported in another Floodsite report: De Bruijn et al., 2008. For a complete description of the Schelde pilot study reference is made to Chapter 8 of the FLOODsite book on pilot sites (Schanze, in prep.).","risk perception; Flood risk; Flood risk management; Flood risk management; strategy; process; planning; risk perception; Stakeholder involvement","en","report","Deltares","","","","","","","","","","","","Floodsite",""
"uuid:8bce1148-57a6-488a-b621-0d3af14c5374","http://resolver.tudelft.nl/uuid:8bce1148-57a6-488a-b621-0d3af14c5374","The concept of double blending: Combining incoherent shooting with incoherent sensing","Berkhout, A.J.; Blacquiere, G.; Verschuur, D.J.","","2009","Seismic surveys are designed so that the time interval between shots is sufficiently large to avoid temporal overlap between records. To economize on survey time, the current compromise is to keep the number of shots to an acceptable minimum. The result is a poorly sampled source domain. We propose to abandon the condition of nonoverlapping shot records to allow densely sampled, wide-azimuth source distributions (source blending). The rationale is that interpolation is much harder than separation. Source blending has significant implications for quality (source density) and economics (survey time). In addition to source blending, detector blending is introduced by which every channel records a superposition of detected signals, each with its own particular code. With detector blending, many more detectors can be used for the same number of recording channels. This is particularly beneficial when the number of detectors is very large (mass sensoring) or the number of channels is limited (wireless recording). The concept of double blending is defined as the case in which both source blending and detector blending are applied. Double blending allows a significant trace-compression factor during acquisition.","data acquisition; geophysical signal processing; geophysical techniques; seismic waves","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:63e3697e-b5b0-47a8-9088-83285d1a4c7b","http://resolver.tudelft.nl/uuid:63e3697e-b5b0-47a8-9088-83285d1a4c7b","Futures for the flood risk system of the Elbe River","Schanze, J.","TU Braunschweig","2009","The report aims at describing the results of testing a methodology for composing, analysing and evaluating long-term futures of flood risk systems mainly considering scenarios of autonomous change as well as strategic alternatives for targeted flood risk reduction.","flood risk analysis; Elbe; Flood risk; Flood risk management; Flood risk management; strategy; process; planning; long-term strategies; Planning","en","report","TU Dresden","","","","","","","","","","","","Floodsite",""
"uuid:d767bc14-01f6-4870-b4ab-c8b1c9b48605","http://resolver.tudelft.nl/uuid:d767bc14-01f6-4870-b4ab-c8b1c9b48605","Methodology for a DSS to support long-term Flood Risk Management Planning","McGahey, C.","TU Braunschweig","2009","This report is Deliverable 18-2 which describes the conceptual, methodological and technological frameworks and how these are implemented for three pilot sites - the Thames, Schelde and Elbe - through prototype decision or discussion support tools. The report describes the generic interactions between all relevant factors that drive and influence flood risk management in the long term and how these may be enacted within the three prototype DSS tools.","Decision support; Flood risk; Flood Risk Assessment; Flood risk management; Flood risk management; strategy; process; planning","en","report","HR Wallingford","","","","","","","","","","","","Floodsite",""
"uuid:d1bf5e81-c779-47c8-ad07-35b6b17a811e","http://resolver.tudelft.nl/uuid:d1bf5e81-c779-47c8-ad07-35b6b17a811e","Constrained Registration of the Wrist Joint","Van de Giessen, M.; Streekstra, G.J.; Strackee, S.D.; Maas, M.; Grimbergen, K.A.; Van Vliet, L.J.; Vos, F.M.","","2009","Comparing wrist shapes of different individuals requires alignment of these wrists into the same pose. Unconstrained registration of the carpal bones results in anatomically nonfeasible wrists. In this paper, we propose to constrain the registration using the shapes of adjacent bones, by keeping the width of the gap between adjacent bones constant. The registration is formulated as an optimization involving two terms. One term aligns the wrist bones by minimizing the distances between corresponding bone surfaces. The second term constrains the registration by minimizing the distances between adjacent sliding surfaces. The registration is based on the Iterative Closest Point algorithm. All bones are registered concurrently so that no bias is introduced towards any of the bones. The proposed registration method delivers anatomically correct configurations of the bones. The registration errors are in the order of the voxel size of the acquired CT data (0.3x0.3x0.3 mm³). The standard deviation in the widths of gaps between adjacent bones is in the order of 10% with an insignificant bias. This is a large improvement over the standard deviations of 30%–80% encountered in unconstrained registration. The value of this method is its capability of accurately registering joints in varying poses resulting in physiological joint configurations.","articulated registration; constrained optimization; image processing; intersubject registration; surface registration; wrist","en","journal article","IEEE","","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:d0054033-62cb-4ee3-8f4d-16b6b4f6f20c","http://resolver.tudelft.nl/uuid:d0054033-62cb-4ee3-8f4d-16b6b4f6f20c","Simulating the use of products: Applying the nucleus paradigm to resource-integrated virtual interaction models","Van der Vegte, W.F.; Horváth, I.; Rusák, Z.","","2009","We introduce a methodology for modelling and simulating fully virtual human-artefact systems, aiming to resolve two issues in virtual prototyping: (i) integration of distinct modelling and simulation approaches, and (ii) extending the deployability of simulations towards conceptual design. We are going to offer designers a new way of investigating the use of a product, by integrating scenarios of expected human-artefact interaction and simulations of artefact behaviour into a unified framework. Since recruitment and employment of human subjects for physical and virtual testing is problematic, we propose a fully virtual simulation method based on resource-integrated models. The models incorporate both the logical and the physical aspects of the behaviours of humans and artefacts. This paper elaborates on a pilot implementation, in particular on realizing the implementation of the physical modelling and simulation elements based on commercially available software packages. Within limitations imposed by the software we used, the applicability testing by carrying out simulations of virtual human-product interaction during the use of a product proved that human-artefact interaction could be simulated with sufficient fidelity based on resource-integrated models,. It also provided useful knowledge on the improvements needed to develop a full-fledged dedicated simulation package.","product design; virtual prototyping; hybrid simulation; use process; nucleus-based modelling; scenar-ios; grasping simulation","en","conference paper","TU Delft","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:0c8b093c-2d86-45e6-8fc4-710782b63831","http://resolver.tudelft.nl/uuid:0c8b093c-2d86-45e6-8fc4-710782b63831","Process-based and Surrogate Modelling of Fine Sediment Transport in the Dutch Coastal Zone","Kai, C.","","2009","Coastal zones which are known as the interface between continents and oceans are vital and important to human beings because a majority of the world's population live in such zones (Nelson, 2007). Coastal systems are among the most dynamic and energetic environments on earth and they are continuously changing because of the dynamic interaction between the oceans and the land. Dronkers (2005) described coasts as multiform, infinitely complex, quasi-fractal, always changing and unpredictable. Sediment process, especially fine sediment transportation is a very complicated feature in many coastal zones as it is affected by physical dynamics, tide, wave, wind and their mutual interactions. Waves and winds along the coast are both eroding rocks and depositing sediments continuously, and the rates of erosion and deposition vary considerably from day to day. Tidal currents also have great effects on sediment transportation. Sedimentation causes many problems in coastal systems. Fine suspended sediment affects local morphology in coastal rivers, estuaries and shelves environments. Fluid mud, a high concentration aqueous suspension of fine sediment, impedes navigation, reduces water quality and causes environmental damages (Sowed, 2008). So it is crucial and of great interests for coastal engineers and water managing authorities to improve understanding of the underlying sedimentation processes and then further to carry out plans for water management, coastal protection, channel maintenance, land reclamation and dredging of deepwater navigational channels, etc. Along the Dutch Coast, a lot of efforts have been made to improve the prediction and understanding of sediment transport processes. 
Process-based models such as SOBEK and Delft3D of Deltares have proven useful in simulating 2D/3D sediment processes in the Dutch coastal areas. Delft3D solves shallow water equations and transport equations for salinity and suspended particulate matter (SPM) numerically using a finite-difference scheme. Delft3D was used to build both large-scale and small-scale models to predict SPM concentrations and siltation rates in the Dutch coastal zones. For example, Van Kessel et al. (2007) built a model of the Southern North Sea and Li (2007) built a local model focused on the mouth of the River Rhine. The results from both models were satisfactory. However, simulating sediment transport with process-based models is often quite time-consuming, which restricts their wide application. More detailed information will be introduced in Chapter 1.3. Data-driven models (DDM) have also been used in the simulation of sediment processes (Bhattacharya et al., 2006). They are based on limited knowledge of physical processes and rely on data describing input and output characteristics. Data-driven techniques are used to build models by deriving mathematical relationships from the analysis of concurrent input and output time series instead of from the analysis of physical processes. 
Solomatine and Ostfeld (2008) describe such models as working on the basis of connections between the system state variables (input, internal and output variables), without relying heavily on assumptions about the natural processes of the system.","Dutch coast; North Sea; Wadden Sea; coastal sediment; currents; morphology; sedimentation processes; process-based models; SOBEK; Delft3D; shallow water; suspended particulate matter; SPM; river Rhine; data-driven models; DDM; morphological; meteorological; hybrid modelling; Delft Cluster; CT05.20; Noordzee & kust; CT05.24.11; morfodynamiek van Noordzee en kust en kustverdediging","en","report","Delft Cluster","","","","","","","","","","","","",""
"uuid:ef0decdb-f710-4c89-8c32-a8e452926632","http://resolver.tudelft.nl/uuid:ef0decdb-f710-4c89-8c32-a8e452926632","A Case Study: Application of the Systems Engineering Modeling in the early phases of a Complex Space System Project","Bone, M.; Cloutier, R.L.; Gill, E.K.A.; Verma, D.","","2009","There is increased recognition of the role of systems engineering in reducing the risk (technical, cost, and schedule) on complex space systems development and integration projects. A number of international systems engineering standards have been published in the last five years (ISO 15288, IEEE 1220, and EIA 632). Closer to the space domain, NASA recently updated and finalized the NASA Systems Engineering Processes and Requirements guidelines (NPR 7123.1 and NPR 7120.5). Figure 1 represents an encapsulated perspective on the key systems engineering processes and their dependencies are articulated in the new NASA NPR 7123.1. The NASA acquisition framework (Figure 2) represents their recursive (across levels) and iterative (within a level) approach to the SE process, and includes milestones and reviews, as well as updates to those events. This paper will focus on the early phases of the systems engineering process. This represents the first two System Design Processes of Figure 1, and the Pre-Systems Acquisition Phase – the Pre-Phase A, Phase A and Phase B in Figure 2. The paper will walk through a case study of a space system from the initial problem statement to defining the architectural technical risk to the program. The case study will show how early system engineering tools such as User Scenarios, Quality Function Deployment, and selection matrixes can be used in the initial system decisions to satisfy the NPR process. Then Systems Engineering Modeling will be illustrated in the context of a space systems case study [2]. 
Unique concepts such as active and passive stakeholders, and stakeholder capabilities and characteristics will be articulated to reduce the risk of misalignment between stakeholder expectations and technical system requirements. A framework for articulating a defined space mission into a set of well expressed and aligned technical requirements will be presented that satisfies the NPR process.","System Engineering Process; milestone; modeling; Space System; NPR","en","conference paper","Research School of Systems Engineering, Loughborough University","","","","","","","","Aerospace Engineering","Space Engineering","","","",""
"uuid:8847b378-74ba-46fb-81d8-f9cd98a234d7","http://resolver.tudelft.nl/uuid:8847b378-74ba-46fb-81d8-f9cd98a234d7","A filtered convolution method for the computation of acoustic wave fields in very large spatiotemporal domains","Verweij, M.D.; Huijssen, J.","","2009","The full-wave computation of transient acoustic fields with sizes in the order of 100x100x100 wavelengths by 100 periods requires a numerical method that is extremely efficient in terms of storage and computation. Iterative integral equation methods offer a good performance on these points, provided that the recurring spatiotemporal convolutions are computed with a coarse sampling and relatively few computational operations. This paper describes a method for the numerical evaluation of very large-scale, four-dimensional convolutions that employs a fast Fourier transformation and that uses a sampling rate close to or at the limit of two points per wavelength and per period. To achieve this, the functions involved are systematically filtered, windowed, and zero-padded with respect to all relevant coordinates prior to sampling. The method is developed in the context of the Neumann iterative solution of the acoustic contrast source problem for an inhomogeneous medium. The implementation of the method on a parallel computer is discussed. The obtained numerical results have a relative root mean square error of a few percent when sampling at two points per wavelength and per period. Further, the results prove that the method enables the computation of transient fields in the order of the indicated size.","acoustic field; acoustic signal processing; convolution; fast Fourier transforms; filtering theory; integral equations; iterative methods; parallel processing; physics computing","en","journal article","Acoustical Society of America","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","",""
"uuid:ca7a680f-4943-4c2a-bd17-4993abbcee80","http://resolver.tudelft.nl/uuid:ca7a680f-4943-4c2a-bd17-4993abbcee80","Estimating and correcting the amplitude radiation pattern of a virtual source","Van der Neut, J.; Bakulin, A.","","2009","In the virtual source (VS) method we crosscorrelate seismic recordings at two receivers to create a new data set as if one of these receivers were a virtual source and the other a receiver. We focus on the amplitudes and kinematics of VS data, generated by an array of active sources at the surface and recorded by an array of receivers in a borehole. The quality of the VS data depends on the radiation pattern of the virtual source, which in turn is controlled by the spatial aperture of the surface source distribution. Theory suggests that when the receivers are surrounded by multi-component sources completely filling a closed surface, then the virtual source has an isotropic radiation pattern and VS data possess true amplitudes. In practical applications, limited sourceaperture and deployment of a single source type create an anisotropic radiation pattern of the virtual source, leading to distorted amplitudes. This pattern can be estimated by autocorrelating the spatial Fourier transform of the downgoing wavefield in the special case of a laterally invariant medium. The VS data can be improved by deconvolving the VS data with the estimated amplitude radiation pattern in the frequency-wavenumber domain. This operation alters the amplitude spectrum but not the phase of the data. We can also steer the virtual source by assigning it a new desired amplitude radiation pattern, given sufficient illumination exists in the desired directions. 
Alternatively, time-gating the downgoing wavefield before crosscorrelation, already common practice in implementing the VS method, can improve the radiation characteristics of a virtual source.","amplitude estimation; deconvolution; Fourier transforms; geophysical signal processing; geophysical techniques; seismic waves","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:a8e5b498-557e-4c8b-a05a-77c5046a4493","http://resolver.tudelft.nl/uuid:a8e5b498-557e-4c8b-a05a-77c5046a4493","Correlation inequalities and applications to vector-valued Gaussian random variables and fractional Brownian motion","Veraar, M.","","2009","","Correlation inequalities; Gebeleins inequality; Gaussian random variables; Maximal inequalities; Law of large numbers; Type and cotype; Gaussian processes; Fractional Brownian motion; BesovOrlicz spaces; Sample path; Non-separable Banach space","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:f8c08807-6c08-4ea5-9be3-87fec54423ad","http://resolver.tudelft.nl/uuid:f8c08807-6c08-4ea5-9be3-87fec54423ad","Virtual reflector representation theorem (acoustic medium)","Poletto, F.; Wapenaar, C.P.A.","","2009","The virtual reflector method simulates new seismic signals by processing traces recorded by a plurality of sources and receivers. The approach is based on the crossconvolution of the recorded signals and makes it possible to obtain the Green’s function of virtual reflected signals as if in the position of the receivers (or sources) there were a reflector, even if said reflector is not present. This letter presents the virtual reflector theory based on the Kirchhoff integral representation theorem for wave propagation in an acoustic medium with and without boundary and a generalization to variable reflection coefficients for scattered wavefields.","acoustic signal processing; boundary-value problems; Green's function methods; seismic waves","en","journal article","Acoustical Society of America","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:a9329307-4db6-4ac1-8d60-7130d9030c3b","http://resolver.tudelft.nl/uuid:a9329307-4db6-4ac1-8d60-7130d9030c3b","On the conceptual design of large-scale process & energy infrastructure systems integrating flexibility, reliability, availability, maintainability and economics (FRAME) performance metrics","Ajah, A.N.","Weijnen, M.P.C. (promotor); Grievink, J. (promotor); Herder, P.M. (promotor)","2009","The environment in which large-scale process and energy infrastructure systems operate is becoming more dynamic and subject to various uncertainties and disturbances. These challenge the engineers and designers to provide solutions and designs that are not only adaptive to a wide range of future conditions and requirements but are reliable.In addressing these challenges, the research methodologically explores how the key performance metrics- Flexibility, Reliability, Availability, Maintainability and Economics (FRAME) can be integrated early in the conceptual design phase of large-scale process and energy infrastructure systems. Novel, structured and systematic conceptual frameworks and mathematical models, for integrating these metrics early in the conceptual design process have been formulated. Solution methods for these mathematical models have been explored. And their applicability, utility and relevance have been demonstrated through thoughtfully designed contemporary process and energy infrastructure systems.","conceptual design; process systems; infrastructure systems; multi-objective optimization; multi-state systems; frame","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:65e09cf9-573f-4839-a671-8edcba31eda4","http://resolver.tudelft.nl/uuid:65e09cf9-573f-4839-a671-8edcba31eda4","All for one: Factors for alignment of inter-dependent business processes at KLM and Schiphol","Perié, R.P.","Santema, S.C. (promotor)","2008","As airline and hub competition becomes fiercer, airline-airport co-operation becomes a necessary option for both main carrier airlines and hub airports to face this competition together. The inter-dependency between airlines and airports in producing air-transport services is tight, i.e. their destinies are inter-twined. Their existence as viable economic entities depends upon market performance of each other. This leads to the assumption that the relation of airlines' airports serves as an example case for dyadic alignment. Although research has been carried out regarding many forms of co-operation, little is known about specifically alignment at the business process level. By alignment of their inter-dependent dyadic business processes competitive advantage can be obtained; both KLM and AAS have acknowledged this. The aim of this research is to determine Factors for Alignment for specific inter-dependent business processes at KLM and AAS. For research purposes the research question is formulated as follows: Which are the factors for alignment of dyadic business processes at KLM and AAS? Answers to this research question are to increase the understanding of the effect of different factors upon alignment. This research has a theoretical as well as a practical value. It develops a theoretical Delft Factors for Alignment (DFA) model. This enables subsequent development of analysis tools that quantitatively and qualitatively measure the performance of Factors for Alignment. For practical purposes, it identifies issues and maps differences and similarities present between KLM and AAS within their specific dyadic business processes. 
These dyadic processes are Environmental Capacity, Network Planning, Infrastructure Planning and Aircraft Stand Allocation. This research is based upon the assumption that alignment of the dyadic business processes of KLM and AAS is achieved by addressing the issues affecting alignment regarding various subjects within each business process, as indicated by employees of these firms. By making use of interviews and questionnaires within both firms it is found that the issues present within four dyadic business processes of these firms, at three different levels of decision making, can be modeled by the developed DFA model. The model identifies the Factors for Alignment with the most potential for their dyadic business processes. It is proven that the DFA model is a diagnostic tool for finding the Factors for Alignment of dyadic business processes of KLM and AAS by creating a structured ordering of the issues obtained from interviews and questionnaires. The research question, as formulated above, is answered by primary and secondary Factors for Alignment per business process. This also implies that the DFA model is effective for analysis of dyadic business processes. The research methodology has proven to be viable. This encourages its application to research of other dyadic business processes at KLM and AAS, which could also strengthen their competitive advantage.","factors; alignment; business processes; dyads; inter-dependency; airline-airport relationship","en","doctoral thesis","","","","","","","","","Aerospace Engineering","","","","",""
"uuid:fccdd752-4b9e-4ff8-827c-ba14709cde31","http://resolver.tudelft.nl/uuid:fccdd752-4b9e-4ff8-827c-ba14709cde31","Universality for distances in power-law random graphs","Van der Hofstad, R.; Hooghiemstra, G.","","2008","We survey the recent work on phase transition and distances in various random graph models with general degree sequences. We focus on inhomogeneous random graphs, the configuration model, and affine preferential attachment models, and pay special attention to the setting where these random graphs have a power-law degree sequence. This means that the proportion of vertices with degree k in large graphs is approximately proportional to k?? for some ?>1. Since many real networks have been empirically shown to have power-law degree sequences, these random graphs can be seen as more realistic models for real complex networks than classical random graphs such as the Erd?s–Rényi random graph. It is often suggested that the behavior of random graphs should have a large amount of universality, meaning, in this case, that random graphs with similar degree sequences share similar behavior. We survey the available results on graph distances in power-law random graphs that are consistent with this prediction.","complex networks; graph theory; phase transformations; random processes","en","journal article","American Institute of Physics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:6df3fd6e-039c-4554-a771-868fc807fc7f","http://resolver.tudelft.nl/uuid:6df3fd6e-039c-4554-a771-868fc807fc7f","Brightness measurements of a gallium liquid metal ion source","Hagen, C.W.; Fokkema, E.; Kruit, P.","","2008","The virtual source size of a liquid metal ion source is an order of magnitude larger than the size of the region from which the ions are emitted at the source. This source size has a direct effect on the reduced brightness and, hence, on the performance of these sources. The variation of the virtual source size of a gallium liquid metal ion source as a function of the angular current density at the source has been measured. This was done by measuring the source image size from images of a pencil lead sample taken with an FEI focused ion beam system. The measurements indicate that the virtual source size grows from about 50–80?nm when the emission current increases from 1?to?10??A. The experimental data on the virtual source size are compared with the theory on stochastic Coulomb interactions in the source region. On the basis of these measurements the authors show that the reduced brightness deteriorates with an increasing angular current density. The maximum reduced brightness measured was 1×106?A/(m2?sr?V).","brightness; current density; focused ion beam technology; liquid metal ion sources; stochastic processes; Gallium; graphite","en","journal article","American Vacuum Society","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","",""
"uuid:f6c5af83-0212-48d6-82ee-71e79bc8b2d5","http://resolver.tudelft.nl/uuid:f6c5af83-0212-48d6-82ee-71e79bc8b2d5","Computation of reinforcement for solid concrete","Hoogenboom, P.C.J.; De Boer, A.","","2008","Reinforcement in a concrete structure is often determined based on linear elastic stresses. This paper considers computation of the required reinforcement when these stresses have been determined by the finite element method with volume elements. Included are both tension reinforcement and compression reinforcement, multiple load combinations and crack control in the serviceability limit state. Results are presented of seventeen stress state examples.","reinforcement design; thee-dimensional stresses; optimisation; FEM; post processing","en","journal article","","","","","","","","","Civil Engineering and Geosciences","Design and Construction","","","",""
"uuid:22b90d92-9e63-4fa0-8d6e-2106912e23f9","http://resolver.tudelft.nl/uuid:22b90d92-9e63-4fa0-8d6e-2106912e23f9","Preprocessing of gravity gradients at the GOCE high-level processing facility","Bouman, J.; Rispens, S.; Gruber, T.; Koop, R.; Schrama, E.; Visser, P.; Tscherning, C.C.; Veicherts, M.","","2008","One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth scienes and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. 
Both methods allow gravity gradient scale factors to be estimated down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.","GOCE; High-level processing facility; Gravity gradients; Preprocessing; Calibration","en","journal article","Springer","","","","","","","","Aerospace Engineering","Delft Institute of Earth Observation and Space Systems, DEOS","","","",""
"uuid:60cf4ba1-732e-4786-bcc6-3f211b94844e","http://resolver.tudelft.nl/uuid:60cf4ba1-732e-4786-bcc6-3f211b94844e","An exploration towards a more sustainable process for dimethyl naphthalene-2,6-dicarboxylate over acidic zeolites","Bouvier, C.P.","Buijs, W. (promotor)","2008","This thesis describes the challenge to apply a breakthrough in the synthesis of acidic zeolitic catalysts in the development of a sustainable process for dimethyl naphthalene-2,6-dicarboxylate. BiModal POrous Materials (BIPOMs) are zeolitic materials, which provide highway access to confined catalytic sites, thus allowing selective reactions with high rates. Much attention was paid to the selection of a suitable model system. It should be representative for a real existing industrial problem with sustainability, while at the same showing the potential of the new catalytic approach. Diisopropylation of naphthalene was chosen as a model system. There are numerous claims in literature for shape-selective diisopropylation of naphthalene with H-mordenite, but the reaction rate is too low to allow industrial application. Furthermore Kureha operates an industrial process for the production of a mixture of diisopropylnaphthalenes, and finally SRI has published a process cost study on Amoco technology leading to 2,6-dimethylnaphthalene carboxylate. Contrary to the claims in literature, the reaction was not shape selective with H-mordenite, but controlled by the relative stability of the isomeric diisopropylnaphthalenes. However the applicability of the BIPOM concept turned out to be very successful. The BIPOM catalyst not only showed a >200 times increase in yield compared to its parent normal zeolite, but also showed a significant yield increase compared to the best available zeolitic catalyst (H-USY). Explorative crystallisation experiments indicate that the production of pure 2,6-diisopropylnaphthalene seems possible. 
Thus, a new process for the industrial production of dimethyl naphthalene-2,6-dicarboxylate was designed and evaluated. The base case scenario, though slightly better than the Amoco technology, is still not economically attractive. However, in close analogy to the existing Kureha process, the higher yield scenario seems realistic, leading to an ROI of ~ 12%, close to the limit of an economically attractive process. The final conclusion of this work is that violation of atomic efficiency, by inherently losing 4 carbons out of 6, cannot be compensated for by an otherwise excellent catalytic concept.","heterogeneous acid catalysis; dipn; process evaluation; sustainability","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:015bf386-811f-4587-b50b-9bac69f19699","http://resolver.tudelft.nl/uuid:015bf386-811f-4587-b50b-9bac69f19699","Passive seismic interferometry by multidimensional deconvolution","Wapenaar, C.P.A.; Van der Neut, J.R.; Ruigrok, E.N.","","2008","We introduce seismic interferometry of passive data by multidimensional deconvolution (MDD) as an alternative to the crosscorrelation method. Interferometry by MDD has the potential to correct for the effects of source irregularity, assuming the first arrival can be separated from the full response. MDD applications can range from reservoir imaging using microseismicity to crustal imaging with teleseismic data.","deconvolution; geophysical techniques; multidimensional signal processing; seismology","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:f3f7447c-a3b3-46f0-b555-4e6bfc54818e","http://resolver.tudelft.nl/uuid:f3f7447c-a3b3-46f0-b555-4e6bfc54818e","DRIE and Bonding Assisted Low Cost MEMS Processing of In-plane HAR Inertial Sensors","Rajaraman, V.; Makinwa, K.A.A.; French, P.J.","","2008","We present a simple, flexible and low cost MEMS fabrication process, developed using deep reactive ion etching (DRIE) and wafer bonding technologies, for manufacturing in-plane high aspect ratio (HAR) inertial sensors. Among examples, the design and fabrication results of a two axis inertial device are presented. Fabricated device thickness ranged up to 140 ?m and a HAR of 28 was obtained. Compared to the existing approaches reported in literature, the salient features of the presented process are: single-sided single-wafer processing using just two lithographic masks, capability to fabricate standalone MEMS as well as CMOS compatible MEMS post-processing via process variations, the use of plasma etching for wafer thinning that facilitates stictionless dry-release of MEMS, and its suitability for batch processing.","Deep Reactive Ion Etching (DRIE); High Aspect Ratio MEMS; Inertial Sensors; Accelerometer; Dry MEMS Release; CMOS Compatible Post-processing; Adhesive Wafer Bonding; Plasma Wafer Thinning","en","conference paper","IEEE","","","","","","","2010-09-22","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","",""
"uuid:b7ea91b3-12e0-4b4f-b71b-7428b0e67054","http://resolver.tudelft.nl/uuid:b7ea91b3-12e0-4b4f-b71b-7428b0e67054","High-throughput technologies for bioseparation process development","Ahamed, T.","Van der Wielen, L.A.M. (promotor)","2008","","protein purification; process design; bioseparation","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:8144d042-021f-4fa0-809b-1e5b3b72647b","http://resolver.tudelft.nl/uuid:8144d042-021f-4fa0-809b-1e5b3b72647b","Product sounds: Fundamentals and application","Ozcan-Vieira, E.","Jacobs, J.J. (promotor)","2008","Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about the act of hearing or a sensory response to an acoustical stimulus (e.g., this is a loud and sharp sound). A complimentary and meaningful relationship exists between a product and its sounds. The bases for this complimentary relationship is the focus of this thesis. In other words, meaningful associations of product sounds are investigated from a human perspective. Empirical findings indicate that sound is regarded as an integral property of a product. Thus product, as the sound source, determines the majority of the meaningful associations. Context, in which a product is presented, also influences meaning attribution. The result of the meaning attribution is often a product label, a description of an event, or determining the purpose of the sound. Consequently, a well-designed sound should be typical to the product, be informative about the product's operation cycle, and convey implicit/explicit characteristics of the product. The thesis translates the empirical findings into guidelines for designers. In addition, tools and methods are proposed to support designers in their sound related activities.","product sounds; product experience; semantics; cognitive processes; perception; cognition; auditory cognition; product design; sound design; designers; tools; methods; pictograms","en","doctoral thesis","","","","","","","","","Industrial Design Engineering","","","","",""
"uuid:b2b33664-6da1-418f-9af4-5ab98fdd439d","http://resolver.tudelft.nl/uuid:b2b33664-6da1-418f-9af4-5ab98fdd439d","Acquisition geometry analysis in complex 3D media","Van Veldhuizen, E.J.; Blacquiere, G.; Berkhout, A.J.","","2008","Increasingly, we must deal with complex subsurface structures in seismic exploration, often resulting in poor illumination and, therefore, poor image quality. Consequently, it is desirable to take into consideration the effects of wave propagation in the subsurface structure when designing an acquisition geometry. We developed a new, model-based implementation of the previously introduced focal-beam analysis method. The method's objective is to provide quantitative insight into the combined influence of acquisition geometry, overburden structure, and migration operators on image resolution and angle-dependent amplitude accuracy. This is achieved by simulation of migrated grid-point responses using focal beams. Note that the seismic response of any subsurface can be composed of a linear sum of grid-point responses. The focal beams have been chosen because any migration process represents double focusing. In addition, the focal source beam and focal detector beam relate migration quality to illumination properties of the source geometry and sensing properties of the detector geometry, respectively. Wave-equation modeling ensures that frequency-dependent effects in the seismic-frequency range are incorporated. We tested our method by application to a 3D salt model in the Gulf of Mexico. Investigation of well-sampled, all-azimuth, long-offset acquisition geometries revealed fundamental illumination and sensing limitations. Further results exposed the shortcomings of narrow-azimuth data acquisition. 
The method also demonstrates how acquisition-related amplitude errors affect seismic inversion results.","data acquisition; geophysical prospecting; geophysical signal processing; seismology","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:4ce33f95-c280-47f2-b443-d39eb24f7eea","http://resolver.tudelft.nl/uuid:4ce33f95-c280-47f2-b443-d39eb24f7eea","The spatial data-adaptive minimum-variance distortionless-response beamformer on seismic single-sensor data","Panea, I.; Drijkoningen, G.G.","","2008","Coherent noise generated by surface waves or ground roll within a heterogeneous near surface is a major problem in land seismic data. Array forming based on single-sensor recordings might reduce such noise more robustly than conventional hardwired arrays. We use the minimum-variance distortionless-response (MVDR) beamformer to remove (aliased) surface-wave energy from single-sensor data. This beamformer is data adaptive and robust when the presumed and actual desired signals are mismatched. We compute the intertrace covariance for the desired signal, and then for the total signal (desired signal+noise) to obtain optimal weights. We use the raw data of only one array for the covariance of the total signal, and the wavenumber-filtered version of a full seismic single-sensor record for the covariance of the desired signal. In the determination of optimal weights, a parameter that controls the robustness of the beamformer against an arbitrary desired signal mismatch has to be chosen so that the results are optimal. This is similar to stabilization in deconvolution problems. This parameter needs to be smaller than the largest eigenvalue provided by the singular value decomposition of the presumed desired signal covariance. We compare results of MVDR beamforming with standard array forming on single-sensor synthetic and field seismic data. We apply 2D and 3D beamforming and show prestack and poststack results. 
MVDR beamformers are superior to conventional hardwired arrays for all examples.","array signal processing; covariance analysis; geophysical prospecting; geophysical signal processing; seismology; signal denoising; singular value decomposition","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:41ef3c93-d6f2-4b56-aa77-ee4fd5cfaaa3","http://resolver.tudelft.nl/uuid:41ef3c93-d6f2-4b56-aa77-ee4fd5cfaaa3","Parallel Scalability of Video Decoders","Meenderinck, C.; Azevedo, A.; Juurlink, B.; Alvarez Mesa, M.; Ramirez, A.","","2008","An important question is whether emerging and future applications exhibit sufficient parallelism, in particular thread-level parallelism, to exploit the large numbers of cores future chip multiprocessors (CMPs) are expected to contain. As a case study we investigate the parallelism available in video decoders, an important application domain now and in the future. Specifically, we analyze the parallel scalability of the H.264 decoding process. First we discuss the data structures and dependencies of H.264 and show what types of parallelism it allows to be exploited. We also show that previously proposed parallelization strategies such as slice-level, frame-level, and intra-frame macroblock (MB) level parallelism, are not sufficiently scalable. Based on the observation that inter-frame dependencies have a limited spatial range we propose a new parallelization strategy, called Dynamic 3D-Wave. It allows certain MBs of consecutive frames to be decoded in parallel. Using this new strategy we analyze the limits to the available MB-level parallelism in H.264. Using real movie sequences we find a maximum MB parallelism ranging from 4000 to 7000. We also perform a case study to assess the practical value and possibilities of a highly parallelized H.264 application. The results show that H.264 exhibits sufficient parallelism to efficiently exploit the capabilities of future manycore CMPs.","H.264; Chip multiprocessors; Scalability; Parallel processing; Video codecs","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics and Computer Engineering","","","",""
"uuid:841c4886-e971-40fb-b0e4-e7ef0deeda56","http://resolver.tudelft.nl/uuid:841c4886-e971-40fb-b0e4-e7ef0deeda56","Universality for the Distance in Finite Variance Random Graphs","Van den Esker, H.; Van der Hofstad, R.; Hooghiemstra, G.","","2008","We generalize the asymptotic behavior of the graph distance between two uniformly chosen nodes in the configuration model to a wide class of random graphs. Among others, this class contains the Poissonian random graph, the expected degree random graph and the generalized random graph (including the classical Erdos-Renyi graph). In the paper we assign to each node a deterministic capacity and the probability that there exists an edge between a pair of nodes is equal to a function of the product of the capacities of the pair divided by the total capacity of all the nodes. We consider capacities which are such that the degrees of a node have uniformly bounded moments of order strictly larger than two, so that, in particular, the degrees have finite variance. We prove that the graph distance grows like log(nu) N, where the nu depends on the capacities and N denotes the size of the graph. In addition, the random fluctuations around this asymptotic mean log(nu) N are shown to be tight. We also consider the case where the capacities are independent copies of a positive random Lambda with P (Lambda>x) <= cx(1-tau), for some constant c and tau > 3, againg resulting in graphs where the degrees have finite variance. The method of proof of these results is to couple each member of the class to the Poissonian random graph, for which we then give the complete proof by adapting the arguments of van der Hofstad et al.","Random Graphs; Graph distances; Inhomogeneous random graphs; Coupling; Branching processes; Universality","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:cafa3d8a-802b-45be-8f2b-26d94bb246ba","http://resolver.tudelft.nl/uuid:cafa3d8a-802b-45be-8f2b-26d94bb246ba","Spatial Nonhomogeneous Poisson Process in Corrosion Management","López De La Cruz, J.; Kuniewski, S.P.; Van Noortwijk, J.M.; Guriérrez, M.A.","","2008","A method to test the assumption of nonhomogeneous Poisson point processes is implemented to analyze corrosion pit patterns. The method is calibrated with three artificially generated patterns and manages to accurately assess whether a pattern distribution is random, regular, or clustered. The interevent and the nearest-neighbor statistics are employed to check the method's performance. Three empirical corrosion patterns are studied. The outcome of this investigation suggests that maximum pit depths are generally encountered where pit clusters are detected. This result is in agreement with previous studies.","corrosion; electrochemistry; random processes; statistics","en","journal article","The Electrochemical Society","","","","","","","","Delft University of Technology","","","","",""
"uuid:158f5c7b-5b60-4636-beef-69007efe7986","http://resolver.tudelft.nl/uuid:158f5c7b-5b60-4636-beef-69007efe7986","Controlling enigneering-to-order processes in shipbuilding, a model-based approach","Coenen, J.M.G.","Nienhuis, U. (promotor)","2008","Engineering-to-Order processes in shipbuilding are characterised by their results: unique ships built on customer specification. A challenge is the control of such processes: the combination of complex technical solutions, a large quantity of specialist engineers of different parties, highly interrelated âconcurrentâ tasks, continuous exchange of information and also stochastic events make that current planning and management tools do not suffice. This research describes a first exploration in the field of modelling Engineering-to-Order processes, in order to obtain insight in fundamental process behaviour. This with the purpose to improve process control tools in the future. Innovative aspects of this research lie in the following fields: -detailed modelling of abstract engineering processes with their unique features -a fast configuration method for the generation of ship specific process models -integration of process diagrams for representation of models and simulation models -the introduction of simulation-based, stochastic planning in shipbuilding.","engineering-to-order; shipbuilding; simulation; planning; processes","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:6ab19a45-2654-4256-8170-c0ffeff5eaa3","http://resolver.tudelft.nl/uuid:6ab19a45-2654-4256-8170-c0ffeff5eaa3","Development and evaluation of ultra high pressure waterjet cutting","Susuzlu, T.","Karpuschewski, B. (promotor)","2008","Abrasive waterjet (AWJ) cutting is a machining process to cut wide range of materials from soft materials such as rubber, leather to hard materials such as metals by means of a high-velocity slurry jet, formed as a result of injecting abrasive particles into a waterjet. The machining action is the result of these particles impacting against a workpiece with a high velocity. Conventional AWJ equipments generate water pressures up to 400 MPa (=4000 bar = 58000 psi) and use orifices whose diameters are in the range of 0.08 mm to 1 mm to generate plain waterjet. The abrasive particles of sizes 0.07 mm to 0.36 mm in diameter entrain to the jet former with air and mix with the waterjet in the mixing chamber to form the three phase slurry jet. The abrasive particles are accelerated and focused in the focusing tube. The width of the focusing tube determines the cutting width which is in the range of 0.5 mm to 1.5 mm in diameter. This study investigates the applicability and the performance of waterjet (WJ) and AWJcutting process beyond 400 MPa water pressure, which is called ultra-high pressures during the study. One of the objectives is to expand the application domain of the process. With higher water pressure, plain WJ is capable of cutting harder materials and it is possible to cut intricate details with AWJ due to the availability of high energy density with small orifices. Moreover, reduction in cutting costs is expected as a result of higher feed speeds or reduced abrasive consumption. The initial focus of the research is to provide guidelines to develop a reliable AWJ cutting system above 400 MPa. 
It was shown that plastic deformation takes place in thick-walled cylinders subjected to internal pressures of more than 700 MPa with the types of materials generally used in high-pressure components. Therefore, imposing compressive residual stress on the bore of the cylinder is necessary for parts such as the high-pressure intensifier cylinder, where plastic deformation is unacceptable. Autofrettage and multi-layer construction are two techniques to create residual stresses in the cylinder. An optimum autofrettage pressure exists due to the Bauschinger effect. Therefore, a multi-layer cylinder construction provides cylinders with a higher pressure capacity. A simplified model for predicting the pressure output of double-acting pressure intensifiers is presented after the design considerations for thick-walled cylinders, in order to estimate the required attenuator volume and the high-pressure cylinder dimensions that limit the pressure fluctuations. The model is in good agreement with the pressure measurements. The energy conversions and the related efficiencies during the AWJ formation process provide a perspective on the performance of the process. The energy density of the plain and abrasive waterjet is defined to correlate with the cutting performance. Reducing the focusing tube diameter and increasing the water pressure are the most beneficial methods to increase the energy density. Other methods, such as increasing the orifice diameter or reducing the feed speed, are in conflict with cutting intricate details and economic considerations. With the insight gained in the previous step, the performance of the plain and abrasive waterjets is evaluated. The increase in pressure results in a more scattered jet. A diverted jet generates wider cuts with wider damaged zones and rounded edges in WJ cutting. In AWJ cutting, it accelerates the wear of the entry region of the focusing nozzle.
The experiments show that the length and diameter of the upstream tube play an important role in jet quality. Turbulence in the flow is reduced in the upstream tube, which should be sufficiently large to make the flow laminar. Moreover, when the streamlines are guided towards the orifice with a conical seal, the resultant jet disintegrates later. After the quality of the jet is ensured, the cutting performance tests are conducted. The maximum feed rate increases more than the hydraulic power of the waterjet, which shows that increasing the pressure leads to a more power-efficient process. Moreover, at the same hydraulic power, the smaller jets perform better. On the other hand, the increase in depth of cut with pressure is directly proportional to the hydraulic power. It is proposed that the depth of cut is directly proportional to the energy density of the jet. However, at low feed rates the relation is no longer linear. Therefore, the feed rate term of the energy density equation is modified to predict the depth of cut. Due to the cutting mechanism of the plain waterjet, the surface quality is poor, with burrs in the case of metals and fiber damage in the case of composite materials. Therefore, plain waterjet cutting is suitable for separating rather than precision cutting of metal sheets and composites. The mixing and acceleration of the particles with a waterjet determine the cutting ability of AWJ. It becomes less efficient at high abrasive loads. Momentum transfers more efficiently from the plain waterjet to the abrasives as the pressure increases at the same abrasive load ratio. The efficiency of power transferred from the water to the abrasives decreases when the abrasive load ratio exceeds 0.3 for low focusing tube to orifice diameter (df/do) ratios and 0.4 for high df/do ratios. The optimum abrasive flow rate does not depend on pressure.
As with the plain waterjet, the energy density of the jet correlates well with the cutting ability of the jet. The linear relation between these parameters becomes non-linear at high energy densities due to the increased energy losses at longer traveling lengths through the material at higher depths of cut. The final consideration of this study is the economic aspects of ultra-high pressure AWJ cutting. The cost advantage depends on how the cost of the pump, the maintenance and the life of the consumables change with the pressure increase. The pressure increase is cost effective if the investment costs and maintenance costs are below certain values. If several engineering issues, such as the lifetimes of the critical components and the availability of sufficiently long wear-resistant focusing tubes, are solved, ultra-high pressure abrasive waterjet cutting can be implemented successfully in industrial applications.","waterjet; abrasive waterjet; autofrettage; high pressure; process model; energy efficiency","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","PMA","","","",""
"uuid:a3e063bf-742b-4a86-b952-93d51be0b775","http://resolver.tudelft.nl/uuid:a3e063bf-742b-4a86-b952-93d51be0b775","Sign Language Recognition by Combining Statistical DTW and Independent Classification","Lichtenauer, J.F.; Hendriks, E.A.; Reinders, M.J.T.","","2008","To recognize speech, handwriting, or sign language, many hybrid approaches have been proposed that combine Dynamic Time Warping (DTW) or Hidden Markov Models (HMMs) with discriminative classifiers. However, all methods rely directly on the likelihood models of DTW/HMM. We hypothesize that time warping and classification should be separated because of conflicting likelihood modeling demands. To overcome these restrictions, we propose using Statistical DTW (SDTW) only for time warping, while classifying the warped features with a different method. Two novel statistical classifiers are proposed—Combined Discriminative Feature Detectors (CDFDs) and Quadratic Classification on DF Fisher Mapping (Q-DFFM)—both using a selection of discriminative features (DFs), and are shown to outperform HMM and SDTW. However, we have found that combining likelihoods of multiple models in a second classification stage degrades performance of the proposed classifiers, while improving performance with HMM and SDTW. A proof-of-concept experiment, combining DFFM mappings of multiple SDTW models with SDTW likelihoods, shows that, also for model-combining, hybrid classification can provide significant improvement over SDTW.
Although recognition is mainly based on 3D hand motion features, these results can be expected to generalize to recognition with more detailed measurements such as hand/body pose and facial expression.","time series analysis; face and gesture recognition; 3D/stereo scene analysis; statistical dynamic programming; Markov processes; classifier design and evaluation; real-time systems","en","journal article","IEEE","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:719c6058-5e69-4b5d-8646-9070394958ab","http://resolver.tudelft.nl/uuid:719c6058-5e69-4b5d-8646-9070394958ab","Observation of storm rainfall for flash-flood forecasting Volume 2 - Satellite structured algorithm system (SAS)","Delrieu, G.","TU Braunschweig","2008","This report summarizes the work done by the meteorological team of the TU Dresden within the FLOODsite Task 15, Radar and satellite observation of storm rainfall. The aim of this Task was the development of a radar and satellite Structured Algorithm System (SAS) for quantitative precipitation estimation (QPE) at the space and timescales of interest for flash-flood analysis and prediction. The contribution of the TU Dresden was to develop a satellite-based SAS for detecting extreme storm rainfall using highly resolved geostationary satellite data (Meteosat-6, Meteosat-8). This has been done by building a twofold SAS, one part based on Meteosat-6 Rapid Scan data (M6/RS-SAS) and the second part based on Meteosat-8 data (MSG-SAS). Both parts include several rainfall estimation techniques. Three heavy precipitation events in orographically distinct and consequently flash-flood-prone regions (Alto Adige, Cévennes-Vivarais, Saxony) have been examined by applying these techniques with regard to the possibilities of detecting storm rainfall using satellite data. For validation and as a reference, radar data from the co-operation partners INPG (Institut National Polytechnique de Grenoble) and UniPad (University of Padua) have been used. The Saxon event has been compared to radar data of the DWD (Deutscher Wetterdienst). To correct the estimated rain rates for the orographic situation, the wind and moisture conditions and the cloud growth rate, additional data such as MPEF products and radiosondes were included in the M6/RS-SAS.
The rain rates resulting from the MSG-SAS were corrected with respect to the moisture conditions of the environment and the growth or decay of the raining clouds.","Flash flood; Remote sensing; Satellite observation; Data processing","en","report","INPG Grenoble","","","","","","","","","","","","Floodsite",""
"uuid:c6ac9a18-dd83-4b75-9365-f6ce7343122f","http://resolver.tudelft.nl/uuid:c6ac9a18-dd83-4b75-9365-f6ce7343122f","Voorspelinstrument duurzame vaarweg: Case study fixed layer and sediment nourishment in the Bovenrijn","Yossef, M.F.M.; Zagonjolli, M.; Sloff, C.J.","","2008","","zandsuppletie; sand nourishment; sedimenttransportprocessen; sediment transport processes; Boven Rijn","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:facf8118-8cd3-4f71-b6cb-336fb83cc139","http://resolver.tudelft.nl/uuid:facf8118-8cd3-4f71-b6cb-336fb83cc139","Market intelligence for product excellence","Veldhuizen, H.G.","Hultink, E.J. (promotor); Griffin, A.J. (promotor)","2008","This research project aims to reveal the consequences of market information processing during the development of new high-tech products. Furthermore, this research project tries to identify which factors (antecedents) influence market information processing in high-tech new product development (NPD). Findings from the extant literature and interviews with NPD managers were combined to build a conceptual framework. The conceptual framework contains hypotheses on the potential antecedents and consequences of market information processing. A mail survey research method was used to gather data on 166 NPD projects and to test the hypotheses empirically. The findings of the research indicate that the acquisition, dissemination and use of market information in different stages of high-tech NPD (predevelopment, development, and commercialization stage) are both directly and indirectly associated with product advantage and NPD performance. The results further indicate that some project and company characteristics facilitate market information processing. Increasing project priority, a higher degree of formalization at the company level, better co-operation between different departments and a market-oriented culture contribute to market information processing in high-tech NPD.","market information processing; new product development; high-tech products","en","doctoral thesis","","","","","","","","","Industrial Design Engineering","","","","",""
"uuid:f61454f3-3668-48df-82a2-6929af6bc17b","http://resolver.tudelft.nl/uuid:f61454f3-3668-48df-82a2-6929af6bc17b","Whitecapping and wave field evolution in a coastal bay","Mulligan, R.P.; Bowen, A.J.; Hay, A.E.; Van der Westhuysen, A.J.; Battjes, J.A.","","2008","Evolution of the wave field in a coastal bay is investigated, by comparison between field observations and numerical simulations using a spectral wave model (Simulating WAves Nearshore (SWAN)). The simulations were conducted for the passage of an extratropical storm, during which surface elevation spectra were bimodal owing to local wind-sea generation and swell propagation into the bay. SWAN was run in stationary and nonstationary mode for two whitecapping source term formulations. The first was developed by Komen et al. (1984) and is dependent on spectrally averaged wave steepness, and thus includes swell in the calculation of whitecapping dissipation and typically overestimates wind sea in the presence of swell. The second, proposed by van der Westhuysen et al. (2007), estimates whitecapping of wind sea locally in the wave spectrum and is not coupled to swell energy. This formulation reproduced the magnitude and shape of the observed wind-sea spectral peak much better than the previous formulation. Whitecapping dissipation rates have been estimated from observations, using the equilibrium range theory developed by Phillips (1985), and are well correlated with both wind speed and acoustic backscatter observations. These rates agree with SWAN estimates using the spectrally local expression, and provide additional physical validation for the whitecapping source term.","surface waves; nearshore processes; wave modeling","en","journal article","American Geophysical Union","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:a8560098-ce6a-4ac0-9a40-329777dab1d0","http://resolver.tudelft.nl/uuid:a8560098-ce6a-4ac0-9a40-329777dab1d0","High-resolution luminescence spectroscopy study of down-conversion routes in NaGdF4:Nd3+ and NaGdF4:Tm3+ using synchrotron radiation","Van der Kolk, E.; Dorenbos, P.; Krämer, K.; Biner, D.; Güdel, H.U.","","2008","","excited states; gadolinium compounds; neodymium; phosphors; photoluminescence; sodium compounds; thulium; two-photon processes","en","journal article","American Physical Society","","","","","","","","Applied Sciences","Radiation, Radionuclides and Reactors","","","",""
"uuid:ffb57c20-8269-4399-878f-90cb3e71c3fd","http://resolver.tudelft.nl/uuid:ffb57c20-8269-4399-878f-90cb3e71c3fd","Sub-10 nm focused electron beam induced deposition","Van Dorp, W.F.","Kruit, P. (promotor)","2008","The work started with a critical review of the literature from the past 70-odd years. The review shows that the physical processes occurring in EBID are generally well understood. By combining models for electron scattering in a solid and electron beam induced heating and knowledge of growth regimes, the majority of the experimental results were explained qualitatively. The review makes clear that several major issues remain. The fact that cross sections for electron scattering in a solid and electron-induced precursor dissociation are not well known makes it difficult to interpret experiments where the acceleration voltage is varied. Related to this is the limited understanding of electron-induced precursor dissociation. The dissociation mechanism is one of the key factors determining the purity of the deposits, and a better understanding of this process will help to develop EBID to its full potential. The growth behavior in the sub-10 nm regime was explored by writing lines and arrays of dots from W(CO)6. The smallest average values found for the full width at half maximum are 1.9 nm for lines and 0.72 nm for dots. These are world records for EBID, and it is shown for the first time that growth on this scale is determined by random processes. The deposits consist of so few molecules that the counting statistics become visible. The result is that, despite identical conditions, deposits are not identical. The final deposited mass varies from dot to dot, and dots do not nucleate exactly on the irradiated position but randomly around it. This results in nonsymmetrical dots in the early stage of growth. More insight into the deposition process is obtained by monitoring the annular dark field signal during the growth.
This revealed that the growth rate during the deposition is not constant. The method also allowed control over the growth, for instance to prevent the occurrence of a proximity effect. Atomic force microscopy measurements allowed quantification of the deposited volume. The distributions of the deposited volume as a function of dwell time bear a close similarity to Poisson distributions, which suggests that the deposited dots consist of a number of discrete units. From a fit of Poisson distributions to the volume distributions, it was concluded that the volume per unit is as small as 0.4 nm3. This volume is almost as small as that of a single W(CO)6 molecule in the solid phase. The work described in this thesis opens up a whole new decade of feature sizes from 20 to sub-1 nm and brings the ultimate resolution of single molecules within reach.","electron beam induced deposition; nanometer scale; sub-10 nm; focused electron beam induced processes; scanning transmission electron microscopy; environmental microscopy; nanofabrication; electron beam lithography; poisson statistics","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:032562e6-bdf1-412d-a257-852f77f2652e","http://resolver.tudelft.nl/uuid:032562e6-bdf1-412d-a257-852f77f2652e","Collaborative architectural design in virtual reality","Hubers, J.C.","Oosterhuis, K. (promotor); Van Duin, L. (promotor)","2008","In this PhD research, a method and software prototype are developed for COLlaborative Architectural Design In VIRtual reality. The method consists of developing versions of a concept for a building and evaluating them against criteria. Every team member makes his own versions; otherwise they would destroy each other's work. They can evaluate the versions with a criteria matrix that helps to quickly find the main differences in opinion. The discussion should lead to a next version in which the advantages of earlier versions are integrated and the disadvantages eliminated. They work on the final version simultaneously in real-time. The main conclusions of the research are: 1. It appeared to be possible to develop a working software application prototype in Virtools with which a multidisciplinary design team can collaborate in a virtual 3D environment in real-time on the Internet. 2. Only the architect in the test team appeared to be able to develop a conceptual building design with this prototype within the limited time of twice half a day. 3. The advisors in the test team need training in developing and 3D modelling of conceptual building designs. Only then would they be able to participate effectively in the collaborative design of architectural concepts based on this method and prototype.","architecture; design process; collaborative design; parametric design; design evaluation; prototyping","en","doctoral thesis","Publikatieburo Faculteit Bouwkunde TU Delft","","","","","","","","Architecture","","","","",""
"uuid:c643abed-5617-4466-809e-2031fa72f1fc","http://resolver.tudelft.nl/uuid:c643abed-5617-4466-809e-2031fa72f1fc","Automated Design of Application-Specific Smart Camera Architectures","Caarls, W.","Van Vliet, L.J. (promotor); Jonker, P.P. (promotor)","2008","Parallel heterogeneous multiprocessor systems are often shunned in embedded system design, not only because of their design complexity but because of the programming burden. Programs for such systems are architecture-dependent: the application developer needs architecture-specific knowledge to implement his algorithms, as each processor has its own characteristics and programming language. He will therefore often stick to the architectures he knows best instead of looking for the best one. This leads to suboptimal solutions, and costly redesign efforts if the chosen architecture later proves to be insufficient. Our solution to this problem uses a programming model based on the concept of architecture independence through algorithm dependence. By limiting the expressiveness of a programming language to just those concepts needed to implement a given class of algorithms, it may be compiled to a variety of different (parallel) processor architectures. We introduce a new meta-programming language that can be used to compile these algorithm-specific languages. The user program then consists of a number of algorithms written in different languages, which are automatically mapped to the multiprocessor system, achieving architecture independence. We use this architecture independence to conduct an automated design space exploration of possible architectures, creating a Pareto front of optimal trade-offs between performance, area and power consumption.
The developer can choose the final architecture from this set.","image processing; embedded systems; smart cameras; design space exploration; algorithm-specific languages; meta-programming; stream programming; architecture-independent programs","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:c64d84d8-6552-4d3c-9c02-1d511c689c43","http://resolver.tudelft.nl/uuid:c64d84d8-6552-4d3c-9c02-1d511c689c43","Scaling-Up Eutectic Freeze Crystallization","Genceli, F.E.","Witkamp, G.J. (promotor)","2008","A novel crystallization technology, Eutectic Freeze Crystallization (EFC), has been investigated and further developed in this thesis work. EFC operates around the eutectic temperature and composition of aqueous solutions and can be used for the recovery of (valuable) dissolved salts (and/or acids) and water from a wide variety of aqueous process streams. Using EFC, processes producing large quantities of saline solutions could be carried out in an ecologically and economically attractive way. An introduction and a brief summary of earlier work are given in Chapter 1. The experimental study on the pilot-scale Cooled Disc Column Crystallizer (CDCC-2) designed for continuous EFC operation is presented in Chapter 2. CDCC-2 was tested for an industrial MgSO4 stream and evaluated in terms of heat transfer, ice and salt sizes, production and growth rates. The application of conductivity and refractive index measurement techniques for inline concentration and supersaturation measurements of MgSO4 solution is studied in Chapter 3. Chapter 4 presents the CDCC-3 and Skid Mounted Unit, designed and constructed for 130 ton/year MgSO4.7H2O and water production capacities. The MgSO4 salt crystal structure at eutectic conditions was studied and reported in Chapter 5. The MgSO4 crystal hydrate formed below approximately 0 °C was proven to be MgSO4.11H2O instead of the commonly reported MgSO4.12H2O. The crystal structure analysis and the molecular arrangement of these crystals were determined using single-crystal X-ray diffraction. Raman spectroscopy was used for characterizing MgSO4.11H2O and for comparing its vibrational spectra with those of MgSO4.7H2O. Thermogravimetric analysis confirmed the stoichiometry of MgSO4.11H2O.
Additionally, the Miller indices of the major faces of MgSO4.11H2O crystals were determined. Chapter 6 covers the discovery of the natural occurrence of MgSO4.11H2O, the new mineral Meridianiite, as salt inclusions in sea ice from Saroma Lake, Japan, and in Antarctic ice. In Chapter 7, nucleation and crystal growth of MgSO4 aqueous solution on a cooled surface were studied theoretically and experimentally. Coupled heat and mass flux equations from non-equilibrium thermodynamics (Onsager theory with reciprocal relations) were defined for crystal growth and the temperature jump at the interface of the growing crystal. Chapter 8 aims to describe the Cyclic Innovation Model (CIM) and to set a path for commercialization of the EFC technology.","eutectic freeze crystallization; magnesium sulfate; mgso4.11h2o; meridianiite; inline supersaturation determination; inline conductivity measurement; inline refractive index measurement; epsomite; xrd; x-ray diffraction; negative crystal; micro raman spectroscopy; antarctic ice; saroma lake; sea ice; mineral; coupled heat and mass transfer; non-equilibrium thermodynamics; onsager relations; irreversible thermodynamics; crystal growth on a cold surface; cyclic innovation model; cim; cooled disc column crystallizer; cdcc; interface; skid mounted unit; scale-up; crystallization; waste water treatment; aqueous process stream treatment; thermogravimetric analysis; tga","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:98d9d21d-2b15-420b-80aa-6a45b3366e62","http://resolver.tudelft.nl/uuid:98d9d21d-2b15-420b-80aa-6a45b3366e62","Packaging for consumer electronic products: The need for integrating design and engineering","Wever, R.; Boks, C.; Stevels, A.","","2008","From the perspective of a multinational corporation producing durable consumer goods, sustainable packaging is packaging that fulfils the right functionalities in the most efficient way. In order to achieve this, an integral design process is required. Such an integral approach to the design of packaging for CE goods would imply a process that takes into account all requirements, whether they are technical, financial, environmental or psychological in nature, and that also incorporates the relationship between the packed product and the packaging. In this paper, this approach will be defined as packaging design engineering. In business reality, however, a split between packaging design and packaging engineering can be observed. Packaging engineering has to do with protection and fulfilling the distribution functions. It is about the 3-D design, which is also referred to as structural packaging design. This is the expertise typically offered by packaging suppliers. Packaging design, on the other hand, has to do with the appearance of the packaging and is related to the marketing functions. Oftentimes packaging design will be limited to 2-D graphical aspects. It is typically the part of the total packaging concept that is supplied by external packaging design agencies. The tools and methods of packaging engineering and packaging design differ substantially. This is a result of the fact that packaging engineering deals with materials and mechanical behavior, while packaging design deals with people. In practice, one can observe that for a given product either the design aspects or the engineering aspects take preference, while the other receives less attention.
When striving for optimal packaging, either from an economic or from an environmental perspective, these two aspects will have to be balanced. This paper will analyze the existing approaches in both packaging engineering and packaging design, and assess their strengths and weaknesses. The data used originate from scientific literature, case studies of design projects and interviews with both employees from a major consumer electronics firm and employees from packaging supply companies. Ways of improving the integration of the two fields will be proposed.","distribution; marketing; design process; durable goods","en","conference paper","International Association of Packaging Research Institutes","","","","","","","","Industrial Design Engineering","","","","",""
"uuid:f9141828-b224-475c-bf12-ae9cb255aeb3","http://resolver.tudelft.nl/uuid:f9141828-b224-475c-bf12-ae9cb255aeb3","Implementation of the process interaction approach in a general-purpose language","Veeke, H.P.M.","Ottjes, J.A. (advisor)","2008","","Process; interaction","","conference paper","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Transport Engineering and Logistics","","",""
"uuid:f76670cf-9d16-419f-af13-4de225314932","http://resolver.tudelft.nl/uuid:f76670cf-9d16-419f-af13-4de225314932","A design approach for asset supply logistics","Ottjes, J.A.","Lodewijks, G. (advisor)","2008","","Asset supply logistics; transportation safety; oil & gas industry; process-interaction simulation","","conference paper","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Transport Engineering and Logistics","","",""
"uuid:a5157424-63cf-4770-9e07-20c35971a84c","http://resolver.tudelft.nl/uuid:a5157424-63cf-4770-9e07-20c35971a84c","Reconfigurable network processing platforms","Kachris, C.","Goossens, K.G.W. (promotor)","2007","This dissertation presents our investigation of how to efficiently exploit reconfigurable hardware to design flexible, high-performance, and power-efficient network devices capable of adapting to the varying processing requirements of network applications and traffic. The proposed reconfigurable network processing platform targets mainly access, edge, and enterprise devices. These devices have to sustain lower bandwidth than those utilized in core networks. However, the processing requirements on a per-packet basis are much higher in these devices (e.g., payload processing). Furthermore, devices in these networks have to be flexible in order to support emerging network applications. A promising technology for the implementation of these devices is Field-Programmable Gate Arrays (FPGAs). FPGAs are typical devices that combine flexibility (through reconfiguration) and performance (through their inherent hardware nature that can exploit parallelism); therefore, they can efficiently address the requirements of edge and access network devices. A reconfigurable network processing platform is presented that includes reconfigurable hardware accelerators, a reconfigurable queue scheduler, and a configurable transactional memory controller. Furthermore, the performance and the constraints of the platform are formulated as an integer optimization problem, and an integrated design flow is presented for the platform. Both static and dynamic reconfiguration are explored in this dissertation. Static reconfiguration is utilized to address the different processing requirements of network applications, while dynamic reconfiguration is utilized to adapt to network traffic fluctuations.
Two representative devices were implemented and evaluated in the proposed platform: a multi-service edge router and a content-based (web) switch. In the former device, dynamic reconfiguration is utilized to deal with network traffic fluctuations. The device monitors the traffic and adapts to the network traffic fluctuations taking into account the reconfiguration overhead. In the latter device, a reconfigurable architecture for a content-based switch is utilized and compared to a mainstream network processor in terms of performance and power. The device accommodates several co-processors that can be interchanged to perform a specific type of switching (e.g., URL-based or cookie-based switching). Moreover, the exploitation of reconfigurable logic is investigated for queue scheduling in network devices. A reconfigurable queue scheduler is presented that adapts to the network traffic requirements (number of active queues) and can be used both in edge routers and web switches. Finally, configurable transactional memories are proposed which can be used to efficiently deploy multi-processing platforms for network processing applications. The proposed configurable transactional memory controller can be configured based on the application and device features (e.g., number of processors), can offer an easier programming framework for multi-processor reconfigurable platforms, and provides increased performance compared to traditional locking schemes. The results of the research presented in this dissertation show that FPGAs can be an efficient alternative to network processors and can be used not only for lower network layers, but also as a complete platform for emerging network processing applications.","reconfigurable computing; network processing; fpgas","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:2eac935e-cdb1-4c0c-92d1-cbcc8dd2d867","http://resolver.tudelft.nl/uuid:2eac935e-cdb1-4c0c-92d1-cbcc8dd2d867","Markov processes for maintenance optimization of civil infrastructure in the Netherlands","Kallen, M.J.","Van Noortwijk, J.M. (promotor)","2007","The Netherlands, like many countries in the world, faces a challenging task in managing civil infrastructure. The management of vital infrastructures, like road bridges, is necessary to ensure their safe and reliable functioning. The Directorate-General for Public Works and Water Management in the Netherlands manages the structures in the national road network. A large number of bridges and viaducts were constructed during the 1960s and 1970s. Due to many factors, it is difficult to determine the exact length of the remaining life of a structure. This is why the manager frequently performs inspections and registers the state of each structure in a database. A principal element of bridge management systems is the estimation of the uncertain rate of deterioration. This is usually done by using a suitable model and by using information gathered on-site during inspections. This thesis proposes a statistical and probabilistic framework, which enables the decision maker to estimate the rate of deterioration and to quantify his uncertainty about this estimate. The framework consists of a continuous-time Markov process with a finite number of states to model the uncertain rate at which the quality of structures reduces over time. The result of this research is a unified approach to modeling uncertain deterioration and decision making for optimal maintenance management. It has been successfully applied to condition data of more than 3000 concrete structures in the Netherlands, which were gathered from 1985 to 2004.","bridge; deterioration; uncertainty; Markov process; maintenance; reliability","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:9f008eae-0152-496b-8736-fbad3223d767","http://resolver.tudelft.nl/uuid:9f008eae-0152-496b-8736-fbad3223d767","Data acquisition for LTV research & monitoring 'natural development'","Troost, T.A.","","2007","","gegevensbestanden; databases; gegevensverwerking; data processing; validatie; validation; Westerschelde","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:437a4b0f-2ad2-42c3-99d8-f7c90688ed08","http://resolver.tudelft.nl/uuid:437a4b0f-2ad2-42c3-99d8-f7c90688ed08","Limiting factors for electron beam lithography when using ultra-thin hydrogen silsesquioxane layers","Grigorescu, A.E.; Van der Krogt, M.C.; Hagen, C.W.","","2007","","high resolution; electron beam resist; hydrogen silsesquioxane; electron beam nanolithography; development process","en","journal article","SPIE","","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:3a092393-f5d7-4e86-aac4-57bf6555ea13","http://resolver.tudelft.nl/uuid:3a092393-f5d7-4e86-aac4-57bf6555ea13","Electronic cleansing for visualization in CT colonography","Serlie, I.W.O.","Van Vliet, L.J. (promotor)","2007","In this thesis visualization and image processing methods are proposed that solve problems that are critical to the success of CT colonography, a non-invasive method to find the precursors of colon cancer: (1) a new optimal display mode was created, and (2) the segmentation of the colon was enhanced.","image processing; visualization; endoscopy; colonoscopy","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:7a073190-a809-4236-8ccf-04c39e924511","http://resolver.tudelft.nl/uuid:7a073190-a809-4236-8ccf-04c39e924511","Conditions for stochastic integrability in UMD Banach spaces","Van Neerven, J.M.A.M.; Veraar, M.C.; Weis, L.","","2007","","Stochastic integration in UMD Banach spaces; cylindrical Brownian motion; approximation with elementary processes; γ-radonifying operators; vector-valued Besov spaces","en","conference paper","De Gruyter","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:0797113d-e33f-491e-82bd-e23b570903a6","http://resolver.tudelft.nl/uuid:0797113d-e33f-491e-82bd-e23b570903a6","Retrieving reflection responses by crosscorrelating transmission responses from deterministic transient sources: Application to ultrasonic data","Draganov, D.; Wapenaar, K.; Thorbecke, J.; Nishizawa, O.","","2007","By crosscorrelating transmission recordings of acoustic or elastic wave fields at two points, one can retrieve the reflection response between these two points. This technique has previously been applied to measured elastic data using diffuse wave-field recordings. These recordings need to be very long. The retrieval can also be achieved by using deterministic transient sources, with the advantage of using short recordings, but with the necessity of using many P-wave and S-wave sources. Here, it is shown how reflections were retrieved from the crosscorrelation of transient ultrasonic transmission data measured on a heterogeneous granite sample.","acoustic signal processing; ultrasonic reflection; ultrasonic transmission","en","journal article","Acoustical Society of America","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:52bf6847-4a7f-4ddb-8582-9d825abad323","http://resolver.tudelft.nl/uuid:52bf6847-4a7f-4ddb-8582-9d825abad323","Crystallographic orientation- and location-controlled Si single grains on an amorphous substrate for large area electronics","He, M.","Beenakker, C.I.M. (promotor)","2007","","excimer laser crystallization; μ-Czochralski (grain filter) process; crystallographic orientation control; thin film transistor","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:83b02afa-1477-4d92-b21e-55b0a38ee1b9","http://resolver.tudelft.nl/uuid:83b02afa-1477-4d92-b21e-55b0a38ee1b9","Supporting conceptual product design by hybrid simulation of use processes with scenario structures","Van der Vegte, W.F.; Horváth, I.","","2007","The approach described in this paper aims to offer designers a new way to investigate use processes of products by integrating scenarios of expected user behaviour with simulations of physical artefact behaviour. Use is considered a key process in the life cycle of a product, being the phase in which an instance of a product is put into service or applied for its purpose. Our approach aims at resolving three issues: (i) dealing with the diversity of use processes in behavioural simulations, (ii) integrating simulation and modelling approaches and (iii) extending the deployability of behavioural simulations towards conceptual design. Currently, options for behavioural simulation of the use process of a product are limited. Performing complete-picture behavioural simulation in which the product and the human user react to each other’s behaviours is not practicable. To make this possible, a hybrid approach is proposed. Behaviour that is commonly modelled based on the laws of physics is simulated as continuous behaviour, while information-processing behaviour is simulated as discrete behaviour. This paper elaborates on modelling and simulation of discrete behaviour and linking it to continuous-behaviour simulation. Scenario structures are introduced to represent knowledge about different human decision-making patterns that influence the courses of a use process. Depending on what is available to the designer, these can be based on observations from real users or on conjecture. The objective is to make what-if studies possible to compare different scenarios of product use.
This is demonstrated with a pilot study of a basic use process.","Designing for use; scenarios; hybrid simulation; state machines; conceptual design; diversity of use processes; human decision-making; human-artefact interaction","en","conference paper","The Design Society","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:15d44747-97cf-4bf8-9071-2800da16f1df","http://resolver.tudelft.nl/uuid:15d44747-97cf-4bf8-9071-2800da16f1df","Agent-Based Control of Distributed Electricity Generation with Micro Combined Heat and Power: Cross-Sectoral Learning for Process and Infrastructure Engineers","Van Dam, K.H.; Houwing, M.; Lukszo, Z.; Bouwmans, I.","","2007","For the distributed control of an electricity infrastructure incorporating clusters of residential combined heat and power units (micro-CHP or µCHP) a Multi-Agent System approach is considered. The network formed by households generating electricity with µCHP units and the facilitating energy supplier can be regarded as an electricity production system, analogous to a (flexible) manufacturing system. Next, the system boundary is extended by allowing the trade of electricity between networks of households and their supplier. A methodology for designing an agent-based system for manufacturing control is applied to both cases, resulting in a conceptual design for a control system for the energy infrastructure. Because of the analogy between production systems and infrastructures, Process Systems Engineering (PSE) approaches for optimisation and control can be applied to infrastructure system operations. At the same time we believe research on socio-technical infrastructure systems will be a valuable contribution to PSE management strategies.","micro-CHP; multi-agent system; process control; distributed generation; virtual power plant","en","journal article","Elsevier","","","","","","","","Technology, Policy and Management","Department of Energy and Industry","","","",""
"uuid:443e01d7-1c75-4e3d-9c65-4893eb5a8d88","http://resolver.tudelft.nl/uuid:443e01d7-1c75-4e3d-9c65-4893eb5a8d88","Distribution of Global Measures of Deviation Between the Empirical Distribution Function and Its Concave Majorant","Kulikov, V.N.; Lopuhaä, H.P.","","2007","We investigate the distribution of some global measures of deviation between the empirical distribution function and its least concave majorant. In the case that the underlying distribution has a strictly decreasing density, we prove asymptotic normality for several L_k-type distances. In the case of a uniform distribution, we also establish their limit distribution together with that of the supremum distance. It turns out that in the uniform case, the measures of deviation are of greater order and their limit distributions are different.","Empirical process; Least concave majorant; Central limit theorem; Brownian motion with parabolic drift; L_k distance","en","journal article","Springer","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:58f90d42-ed80-427f-81e0-439deea4b9f7","http://resolver.tudelft.nl/uuid:58f90d42-ed80-427f-81e0-439deea4b9f7","Context Knowledge: Supporting Designers' Information Search in the Early Design Phases","Jambak, M.I.","Badke-Schaub, P. (promotor)","2007","Large amounts of data, information and knowledge are used during a design process. Past research has shown that the absence of context is one of the reasons for the difficulty of utilizing data, information and knowledge in a design. This research work explored the concept of context as it has been used in various domains and applications, and searched for the role of context in a design process. The concept of context understood and developed in this study was then implemented in a system called the Contextual Design Information Retrieval System (CDIRS), which aimed at supporting designers' information search in the early design phases. This system was successfully evaluated by potential end users and by experts.","context knowledge; information retrieval; design process; early design phases; knowledge management","en","doctoral thesis","","","","","","","","","Industrial Design Engineering","","","","",""
"uuid:72d4ec8b-997e-49e5-8cf3-53a2fe6a038d","http://resolver.tudelft.nl/uuid:72d4ec8b-997e-49e5-8cf3-53a2fe6a038d","Self-sustained high-temperature reactions: Initiation, propagation and synthesis","Martinez Pacheco, M.","Katgerman, L. (promotor)","2007","Self-Propagating High-Temperature Synthesis (SHS), also called combustion synthesis, is an exothermic and self-sustained reaction between the constituents, which has assumed significance for the production of ceramics and ceramic-metallic materials (cermets), because it is a very rapid processing technique without the need for complex furnaces. However, one of the drawbacks of this route is the high porosity of the final product (typically 50%). This implies the need for a subsequent densification stage, e.g. by pressing. Combustion-synthesized cermets such as TiC-based graded or TiB2-based cermets can provide a good option for the fabrication of functionally graded material (FGM) components, e.g. to be used in armor applications (TiC-based cermets) or for high and medium voltage switchgears (TiB2-based cermets). Self-sustained High-Temperature Reactions (SHR) involve the initiation, and the subsequent propagation, of a reaction front, the reaction being driven by heat release. Heat release from these reactions is potentially interesting for high-temperature welding and brazing operations. Initiation of these reactions can take place by e.g. electrostatic discharge, mechanical (impact and shear) and thermal means. SHR might have applications in semi-conducting bridge and exploding foil initiators. Finally, these reactions might also have industrial relevance for the production of pyrotechnic delays.","combustion synthesis; gasless processes; cermets; kinetics; thermites; mics; esd initiation","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:389fad21-46c3-4368-bf75-ea6f7b1f8586","http://resolver.tudelft.nl/uuid:389fad21-46c3-4368-bf75-ea6f7b1f8586","The dynamics of statics","Turhan Taner, M.; Berkhout, A.J.; Treitel, S.; Kelamis, P.G.","","2007","The statics problem, whether short wavelength, long wavelength, residual, or trim, has always been one of the more time-consuming and problematic steps in seismic data processing. We routinely struggle with issues such as poor signal-to-noise (S/N) ratio, cycle skipping, truncated refractors, wavelets with ambiguous first arrival times, etc. Elevation variations create their own problems and impact the choice of datum—floating, phantom or recourse to a zero-velocity layer. Even if we can overcome some of these problems, we still have a “catch 22” situation in which accurate velocity estimation requires good statics, while good statics estimation requires accurate velocities. To characterize these ambiguities, we have come up with the oxymoron “time-varying statics.”","geophysical techniques; seismology; seismic waves; geophysical signal processing; statics","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:63881492-9c91-4b92-be6a-d33d1152ac4e","http://resolver.tudelft.nl/uuid:63881492-9c91-4b92-be6a-d33d1152ac4e","Thermoplastic composite wind turbine blades: Vacuum infusion technology for anionic polyamide-6 composites","van Rijswijk, K.","Beukers, A. (promotor); Picken, S.J. (promotor)","2007","Due to the increasing costs of fossil fuels and the improved efficiency of wind turbines in the last decade, wind energy has become increasingly cost-efficient and is well on its way to becoming a mainstream source of energy. To maintain a continuous reduction in costs it is necessary to increase the size of the turbines. For the blades a structural redesign is inevitable and an aircraft-wing-like design consisting of ribs, spars and skins made of thermoplastic composite parts is proposed. Unfortunately, state-of-the-art melt processing of thermoplastic composites requires heavy presses, which makes it impossible to produce large and thick structures like wind turbine blades. As an alternative, this thesis describes the development of reactive processing of thermoplastic composites through vacuum infusion, which is a commonly used technique for manufacturing of thermoset composite wind turbine blades. An anionic polyamide-6 (APA-6) casting resin with water-like viscosity is used to impregnate a stack of ""dry"" glass fiber fabrics, after which in situ polymerization of the semi-crystalline APA-6 matrix takes place within 30 minutes at temperatures around 180°C. The developed technology was successfully applied to infuse 2 to 25 mm thick thermoplastic composites with a fiber volume content of 50%. These APA-6 composites possess outstanding static properties and a promising resistance against fatigue, which is one of the main requirements for wind turbine blade composites.
Additional advantages for application in wind energy are the low cost of the resin, the short infusion and curing time, and the fact that APA-6 can be recycled in various ways in an economically sound manner. Increasing the moisture resistance of APA-6 composites is mentioned as the most important recommendation for further development.","wind turbine blades; thermoplastic composites; vacuum infusion; anionic polyamide-6; reactive processing","en","doctoral thesis","","","","","","","","","Aerospace Engineering","","","","",""
"uuid:43dae97c-431c-4ed3-8064-35686940a32a","http://resolver.tudelft.nl/uuid:43dae97c-431c-4ed3-8064-35686940a32a","Transport modelling in coastal waters using stochastic differential equations","Charles, W.M.","Heemink, A.W. (promotor)","2007","In this thesis, a particle model has been developed that takes into account the short-term correlation behaviour of pollutant dispersion. An efficient particle model for sediment transport has also been developed. We have modified the existing particle model by adding extra equations for the suspension, using a probabilistic concept (the Poisson distribution function) to determine the actual number of particles to suspend in each cell. The deposition is modelled by an exponentially decaying ordinary differential equation. In order to get accurate results from Monte Carlo simulations of sediment transport, a large number of particles is often needed. However, computation time in a particle model increases linearly with the number of particles. Thus, we have developed a high-performance particle model for sediment transport by considering three different sediment suspension methods. Parallel simulation experiments are performed in order to investigate the efficiency of these three methods. We conclude that the second method is the best method on distributed computing systems (e.g., a Beowulf cluster), whereas the third maintains the best load distribution. Using variable time stepping to integrate the particle tracks has also proved to be efficient.","Wiener process; dispersion coefficient; coloured noise forces; stochastic differential equation; lagrangian particle model; pollution; sediment transport; parallel processing; speed up; load balance; efficiency","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:481c1c95-b814-4956-99f2-fa850d81e498","http://resolver.tudelft.nl/uuid:481c1c95-b814-4956-99f2-fa850d81e498","Sub-10 nm structures written in ultra-thin HSQ resist layers, using Electron Beam Lithography","Grigorescu, A.E.; Van der Krogt, M.; Hagen, C.W.","","2007","Isolated dots and lines with 6 nm width were written in 20 nm thick Hydrogen silsesquioxane (HSQ) layers on silicon substrates, using 100 keV electron beam lithography. The main factors that might limit the resolution, i.e. beam size, writing strategy, resist material, electron dose, development process, are discussed. We demonstrate that, by adjusting the development process, a very high resolution can be obtained. We report the achievement of 7 nm lines at a 20 nm pitch written in a 10 nm thick HSQ layer, using a KOH-based developer instead of a classical TMAH developer. This is the smallest pitch achieved to date using HSQ resist. We think that the resolution can be improved further, and is presently limited by either the beam diameter (which was not measured separately) or by the not fully optimized development process.","high resolution; electron beam resist; HSQ; electron beam nano-lithography; development process","en","conference paper","SPIE","","","","","","","","Applied Sciences","Kavli Institute of Nanoscience","","","",""
"uuid:13116c7c-9a41-46b9-b35e-ce9ceebea15c","http://resolver.tudelft.nl/uuid:13116c7c-9a41-46b9-b35e-ce9ceebea15c","New instruments for dynamic building-construction: Computer as partner in construction","van Rees, R.","de Ridder, H.A.J. (promotor); Saiyildiz, I.S. (promotor)","2007","For dynamic processes to become possible in Building-Construction, ICT support is needed. Data access and data exchange need to be ubiquitous. Existing research often only targets elaborate international standards (that don't really get off the ground because of international differences) or big, elaborate, expensive systems (that are only available to the top 1% of the companies). This thesis proposes an open source Building-Construction Ontology Web (bcoWeb) coupled with the so-called REST style of web services. Open source, to maximise the possibility of participation and to limit the dependency on organisations or individual companies. Simple REST web interaction, to keep the complexity low without sacrificing functionality. An ontology web consisting of multiple independent (national) ontologies, to facilitate meaningful information exchange without first needing to build The One Big Ontology (that will never be completed).","ontology; building; construction; web; building processes","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:7073ec8e-32ab-4491-8dea-66191f0e190d","http://resolver.tudelft.nl/uuid:7073ec8e-32ab-4491-8dea-66191f0e190d","Digital metering of power components according to IEEE Standard 1459-2000 using the Newton-type algorithm","Popov, M.; Van der Sluis, L.; Terzija, V.V.; Stanojevic, V.","","2007","","IEEE Standard 1459-2000; nonlinear estimation; power measurement; power systems; transient processes","en","journal article","Institute of Electrical and Electronics Engineers IEEE","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:948a633f-d6c5-42f2-ad06-c9436f10d30d","http://resolver.tudelft.nl/uuid:948a633f-d6c5-42f2-ad06-c9436f10d30d","Issues in the design of facilitated collaboration processes","Kolfschoten, G.L.; Den Hengst-Bruggeling, M.; De Vreede, G.J.","","2007","","facilitation; facilitation techniques; collaboration process design; Collaboration Engineering; design and preparation; Group Support Systems","en","journal article","Springer","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:d6948ecc-225e-49c8-aad6-50ec67cbf76e","http://resolver.tudelft.nl/uuid:d6948ecc-225e-49c8-aad6-50ec67cbf76e","An adaptive sampling and windowing interrogation method in PIV","Theunissen, R.; Scarano, F.; Riethmuller, M.L.","","2007","","PIV; image processing; adaptive interrogation; spatial resolution; aircraft wake vortex; shock-wave–boundary layer interaction","en","journal article","IOP","","","","","","","","Aerospace Engineering","","","","",""
"uuid:eb345380-ebd5-447c-bc50-fd8046da25ee","http://resolver.tudelft.nl/uuid:eb345380-ebd5-447c-bc50-fd8046da25ee","Coupled fluid-flow and magnetic-field simulation of the Riga dynamo experiment","Kenjereš, S.; Hanjalić, K.; Renaudier, S.; Stefani, F.; Gerbeth, G.; Gailitis, A.","","2006","Magnetic fields of planets, stars, and galaxies result from self-excitation in moving electroconducting fluids, also known as the dynamo effect. This phenomenon was recently experimentally confirmed in the Riga dynamo experiment [ A. Gailitis et al., Phys. Rev. Lett. 84, 4365 (2000) ; A. Gailitis et al., Physics of Plasmas 11, 2838 (2004) ], consisting of a helical motion of sodium in a long pipe followed by a straight backflow in a surrounding annular passage, which provided adequate conditions for magnetic-field self-excitation. In this paper, a first attempt to simulate computationally the Riga experiment is reported. The velocity and turbulence fields are modeled by a finite-volume Navier-Stokes solver using a Reynolds-averaged-Navier-Stokes turbulence model. The magnetic field is computed by an Adams-Bashforth finite-difference solver. The coupling of the two computational codes, although performed sequentially, provides an improved understanding of the interaction between the fluid velocity and magnetic fields in the saturation regime of the Riga dynamo experiment under realistic working conditions.","plasma magnetohydrodynamics; plasma simulation; plasma transport processes; plasma turbulence; Navier-Stokes equations; finite volume methods; finite difference methods","en","journal article","American Institute of Physics","","","","","","","","Applied Sciences","Multi-Scale Physics","","","",""
"uuid:d60e1d20-978d-44ce-8199-a696021714a9","http://resolver.tudelft.nl/uuid:d60e1d20-978d-44ce-8199-a696021714a9","A monthly interception equation based on the statistical characteristics of daily rainfall","De Groen, M.M.; Savenije, H.H.G.","","2006","This paper presents a simple analytical equation for monthly interception on the basis of the combination of a daily threshold model with the probability distribution of daily rainfall. In this paper, interception has a wider definition than merely canopy interception. It is the part of the rainfall that evaporates after it has been stored on the wetted surface, which includes the canopy, the understory, the bottom vegetation, the litter layer, the soil, and the hard surface. Interception is defined as the process of evaporation from intercepted rainfall. It is shown that this process has a typical timescale of 1 day. Monthly interception models can be improved by taking the statistical characteristics of daily rainfall into account. These characteristics appear to be less variable in space than the rainfall itself. With the statistical characteristics of daily rainfall obtained at a few locations where reliable records are available (for example, airports) monthly models can be improved and applied to larger areas (20–200 km). The equation can be regionalized, making use of the Markov property of daily rainfall. The equation obtained for monthly interception is similar to Budyko's curve.","interception process; ungauged basins; Markov chains; water resources model; Budyko's curve","en","journal article","American Geophysical Union","","","","","","","","Civil Engineering and Geosciences","Water Management","","","",""
"uuid:42f4db62-ef27-4306-bf7f-d7be26a4f6b1","http://resolver.tudelft.nl/uuid:42f4db62-ef27-4306-bf7f-d7be26a4f6b1","Pervaporation and vapour permeation of methanol and MTBE through a microporous methylated silica membrane","de Bruijn, F.T.","Jansens, P.J. (promotor); Kapteijn, F. (promotor)","2006","The combination of conventional unit operations with pervaporation or vapour permeation membrane separation processes offers opportunities for process intensification in terms of augmenting capacity and decreasing energy consumption of conventional unit operations. The MTBE production process is an often studied example of a so-called hybrid process in which distillation is combined with pervaporation or vapour permeation. In this work the transport of pure methanol through, and the separation of methanol from MTBE by, a supported microporous methylated membrane (developed by the Energy research Centre of the Netherlands) are studied. Several aspects of modelling of transport through the support layers and the selective layer are addressed, thereby comparing the Maxwell-Stefan equations for pure methanol transport with a practical engineering model. From experiments performed at temperatures up to 140°C it appeared that both the selectivity towards methanol and the flux of the membrane are high. The thesis ends with a study comparing pervaporation and vapour permeation on laboratory scale as well as on large scale by simulations.","pervaporation; vapour permeation; hybrid processes; membrane separation; (methylated) silica; transport modelling; maxwell-stefan","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:bd7ab4e8-602d-4b24-a9ce-63313e47e6b0","http://resolver.tudelft.nl/uuid:bd7ab4e8-602d-4b24-a9ce-63313e47e6b0","Ultra low-power biomedical signal processing: An analog wavelet filter approach for pacemakers","Pavlík Haddad, S.A.","Long, J.R. (promotor)","2006","The purpose of this thesis is to describe novel signal processing methodologies and analog integrated circuit techniques for low-power biomedical systems. Physiological signals, such as the electrocardiogram (ECG), the electroencephalogram (EEG) and the electromyogram (EMG), are mostly non-stationary. The main difficulty in dealing with biomedical signal processing is that the information of interest is often a combination of features that are well localized temporally (e.g., spikes) and others that are more diffuse (e.g., small oscillations). This requires the use of analysis methods sufficiently versatile to handle events that can be at opposite extremes in terms of their time-frequency localization. The Wavelet Transform (WT) has been extensively used in biomedical signal processing, mainly due to the versatility of the wavelet tools. The WT has been shown to be a very efficient tool for local analysis of nonstationary and fast transient signals due to its good estimation of time and frequency (scale) localizations. Being a multiscale analysis technique, it offers the possibility of selective noise filtering and reliable parameter estimation. Signal analysis methods derived from wavelet analysis carry large potential to support a wide range of biomedical signal processing applications including noise reduction, feature recognition and signal compression. The discussion here deals with wavelet techniques for cardiac signal analysis. Often WT systems employ the discrete wavelet transform, implemented on a digital signal processor.
However, in ultra low-power applications such as biomedical implantable devices, it is not suitable to implement the WT by means of digital circuitry due to the relatively high power consumption associated with the required A/D converter. Low-power analog realization of the wavelet transform enables its application in vivo, e.g. in pacemakers, where the wavelet transform provides a means for extremely reliable cardiac signal detection. In this thesis we present a novel method for implementing signal processing based on the WT in an analog way. The methodology presented focuses on the development of ultra low-power analog integrated circuits that implement the required signal processing, taking into account the limitations imposed by an implantable device.","biomedical systems; pacemakers; wavelet transform; analog signal processing; analog wavelet filters; low-power analog integrators; translinear circuits; log-domain filters; gmc filters; class ab sinh integrators; analog integrated circuits; electronics","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:3535a2df-28bc-4d2e-8a0b-5806ae117f0a","http://resolver.tudelft.nl/uuid:3535a2df-28bc-4d2e-8a0b-5806ae117f0a","Combined reactions and separations using ionic liquids and carbon dioxide","Kroon, M.C.","Witkamp, G.J. (promotor); Peters, C.J. (promotor)","2006","A new and general type of process for the chemical industry is presented using ionic liquids and supercritical carbon dioxide as combined reaction and separation media. In this process, the carbon dioxide pressure controls the miscibility of reactants, products, catalyst and ionic liquid, enabling fast atom-efficient reactions in a homogeneous phase as well as instantaneous product recovery in a biphasic system. High reaction and separation rates can be achieved compared with the conventional fully biphasic alternative. Experimental and theoretical methods are used to find the operating conditions of the new approach. When the ionic liquid/carbon dioxide process is applied to the production of 1600 ton/year Levodopa, a medicine against Parkinson's disease, the energy consumption is reduced by 20,000 GJ per year and the waste generation is reduced by 4800 ton of methanol per year and 480 kg of catalyst per year, resulting in a decrease in total operational costs of 11.3 million euros per year. Therefore, from an economical and an environmental point of view, fast implementation of the new process set-up is desired. Suggestions for the fastest implementation are made based on the cyclic innovation model.","ionic liquid; carbon dioxide; phase behavior; novel process set-up; reaction and separation media; modeling of operating conditions; limits to operating conditions; economic and environmental evaluation; industrial implementation; cyclic innovation model","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:1fb54efc-bfbd-46b3-82ff-fb848bf6e5c9","http://resolver.tudelft.nl/uuid:1fb54efc-bfbd-46b3-82ff-fb848bf6e5c9","Vrachten van stoffen naar het Waddenzeegebied: Deelrapport 2 van 2: inventarisatie gegevens, vrachtberekening en analyse","Arentz, L.; Boon, J.G.","","2006","","stochastische processen; stochastic processes; stoftransport; mass transport; Waddenzee","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:bb943d51-d5e7-409d-916a-c08c8b928263","http://resolver.tudelft.nl/uuid:bb943d51-d5e7-409d-916a-c08c8b928263","Vrachten van stoffen naar het Waddenzeegebied: Deelrapport 1 van 2: methodiek van de vrachtberekening","Arentz, L.; Boon, J.G.; Gils, J.A.G. van; Boogaard, H.F.P. van den","","2006","","stochastische processen; stochastic processes; stoftransport; mass transport; Waddenzee","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:843020de-2248-468a-bf19-15b4447b5bce","http://resolver.tudelft.nl/uuid:843020de-2248-468a-bf19-15b4447b5bce","Organization structures for dealing with complexity","Meijer, B.R.","Bikker, H. (promotor); Tomiyama, T. (promotor)","2006","""Complexity is in the eye of the beholder"" is a well known quote in the research field of complexity. In the world of managers the word complex is often a synonym for difficult, complicated, involving many factors and highly uncertain. A complex business decision requires careful preparation and close attention of the managers and workers involved. This preparation often reduces the uncertainty or reveals the structure of the problems and processes to be dealt with. A complex problem becomes less complex for those involved in solving it. This is the eye of the beholder. An experienced eye will perceive a different level of complexity than an inexperienced. However we cannot say, less complexity for the experienced, more for the inexperienced. Inexperienced observers may overlook, thus underestimate complexity. The goal of this thesis is to show that structure is one of the most important and often overlooked design variables in solving complex business management problems. In this thesis a design procedure and rules for designing business processes and their structure decisions will be presented.","business processes; organization design; organization structure; logistics; development; innovation; complexity","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:e80d55d3-bece-4665-b932-374960ce960c","http://resolver.tudelft.nl/uuid:e80d55d3-bece-4665-b932-374960ce960c","Progressive visualization of incomplete sonar-data sets: From sea-bottom interpolation and segmentation to geometry extraction","Loke, R.E.","Jansen, F.W. (promotor); du Buf, J.M.H. (promotor)","2006","This thesis describes a visualization pipeline for sonar profiling data that show reflections of multiple sediments in the sea bottom and that cover huge survey areas with many gaps. Visualizing such data is not trivial, because they may be noisy and because data sets may be very large. The developed techniques are: (1) Quadtree interpolation for estimating new sediment reflections, at all gaps in the longitude-latitude plane. The quadtree is used for guiding the 3D interpolation process: gaps become small at low spatial resolutions, where they can be filled by interpolating between available reflections. In the interpolation, the reflection data are cross correlated in order to construct continuity of multiple, sloping reflections. (2) Segmentation and boundary refinement in an octree in order to detect sediments in the sonar data. In the refinement, coarse boundaries are reclassified by filtering the data with a planar kernel that is positioned on the boundary between the sediments. This improves existing algorithms and implies that gaps can also be interpolated during the down projection in the octree. (3) Triangulation conform a new version of the Discretized Marching Cubes algorithm that improves the sharpness of the extracted surfaces that lay between the sediments. By combining different surface modeling variants on the high-resolution subgrid of a cuberille, sharp manifold surfaces can be generated, in order to preserve concave and convex sediment shapes. (4) Integration of the techniques in a single-octree framework in order to make it scalable and applicable for the visualization of large data sets. 
The visualization pipeline has been applied for interactive visualization at low and high spatial resolutions.","computer graphics; pattern recognition; image processing; oceanic engineering","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:a3b3a766-c921-4ac9-8262-76ce9b68cf6f","http://resolver.tudelft.nl/uuid:a3b3a766-c921-4ac9-8262-76ce9b68cf6f","Focal transformation, an imaging concept for signal restoration and noise removal","Berkhout, A.J.; Verschuur, D.J.","","2006","Interpolation of data beyond aliasing limits and removal of noise that occurs within the seismic bandwidth are still important problems in seismic processing. The focal transform is introduced as a promising tool in data interpolation and noise removal, allowing the incorporation of macroinformation about the involved wavefields. From a physical point of view, the principal action of the forward focal operator is removing the spatial phase of the signal content from the input data, and the inverse focal operator restores what the forward operator has removed. The strength of the method is that in the transformed domain, the focused signals at the focal area can be separated from the dispersed noise away from the focal area. Applications of particular interest in preprocessing are interpolation of missing offsets and reconstruction of signal beyond aliasing. The latter can be seen as the removal of aliasing noise.","geophysical signal processing; signal reconstruction; signal restoration; imaging; seismology; interference suppression","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:b1fd0675-130c-437b-8f33-915c8f4daa08","http://resolver.tudelft.nl/uuid:b1fd0675-130c-437b-8f33-915c8f4daa08","Shape Optimization of Axisymmetric Ejector","Dvorak, V.","","2006","This contribution deals with aerodynamic optimization using dynamic mesh method provided by Fluent software. A method of transformation of coordinates and an optimization procedure enabling to obtain arbitrary shape of computing mesh are described. This method was developed for shape optimization of axisymmetric ejector in order to obtain the highest efficiency of the whole device. A result of using this method is a series of ejector shapes for different relative back pressure values. Resulting values of efficiency of optimized ejectors, optimal area ratios, optimal ejection ratios and velocity ratios are presented. An analysis of mixing in optimized ejector is carried out.","ejector; mixing process; shape optimization; dynamic mesh","en","conference paper","","","","","","","","","","","","","",""
"uuid:85b14457-8770-463a-af0c-ff2243ac1b01","http://resolver.tudelft.nl/uuid:85b14457-8770-463a-af0c-ff2243ac1b01","Seismic processing in the inverse data space","Berkhout, A.J.","","2006","Until now, seismic processing has been carried out by applying inverse filters in the forward data space. Because the acquired data of a seismic survey is always discrete, seismic measurements in the forward data space can be arranged conveniently in a data matrix (P). Each column in the data matrix represents one shot record. If we represent seismic data in the temporal frequency domain, then each matrix element consists of a complex-valued number. Considering the dominant role of multiple scattering in seismic data, it is proposed to replace data matrix P by its inverse P–1 before starting seismic processing. Making use of the feedback model for seismic data, multiple scattered energy is mapped onto the zero time axis of the inverse data space. The practical consequence of this remarkable property may be significant: multiple elimination in the inverse data space simplifies to removing data at zero time only. Moving to the inverse data space may cause a fundamental change in the way we preprocess and image seismic data.","seismology; inverse problems; geophysical techniques; geophysical signal processing; matrix inversion","en","journal article","Society of Exploration Geophysicists","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:599539d9-746a-4a78-a0ac-d2eb2c7921e9","http://resolver.tudelft.nl/uuid:599539d9-746a-4a78-a0ac-d2eb2c7921e9","Analysis of coupled mass transfer and sol-gel reaction in a two-phase system","Castelijns, H.J.; Huinink, H.P.; Pel, L.; Zitha, P.L.J.","","2006","The coupled mass transfer and chemical reactions of a gel-forming compound in a two-phase system were studied in detail. Tetra-methyl-ortho-silicate (TMOS) is often used as a precursor in sol-gel chemistry to produce silica gels in aqueous systems. TMOS can also be mixed with many hydrocarbons without chemical reaction, which allows for various applications in multiphase systems. In this study, TMOS was mixed with n-hexadecane and placed together with water in small cylinders. Upon contact of the mixture with the water, TMOS transfers completely to the aqueous phase where it forms a gel through a heterogeneous reaction. Nuclear magnetic resonance imaging and relaxation time measurements were employed to monitor the mass transfer of TMOS from the oleic to the aqueous phase. The longitudinal relaxation time (T1) was calibrated and used to determine the concentration of TMOS in n-hexadecane during the transfer. The mass transfer rate was obtained at various temperatures (25–45?°C) and for several initial concentrations of TMOS. In the aqueous phase a sharp decrease in the transversal relaxation time (T2) is observed which is attributed to the gel reaction, in particular the formation of methanol in the initial stage. The minimum in T2 indicates the gelation point, and was found to be strongly dependent on temperature and concentration.","silicon compounds; chemical reactions; organic compounds; mass transfer; sol-gel processing; gels; NMR imaging; nuclear spin-lattice relaxation; spin-spin relaxation","en","journal article","American Institute of Physics","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:5c747130-6773-493e-91f8-c6367b368fd2","http://resolver.tudelft.nl/uuid:5c747130-6773-493e-91f8-c6367b368fd2","Second class particles and cube root asymptotics for Hammersley’s process","Cator, E.A.; Groeneboom, P.","","2006","We show that, for a stationary version of Hammersley’s process, with Poisson sources on the positive x-axis and Poisson sinks on the positive y-axis, the variance of the length of a longest weakly North–East path L(t, t) from (0, 0) to (t, t) is equal to 2E(t ? X(t))+, where X(t) is the location of a second class particle at time t . This implies that both E(t ?X(t))+ and the variance of L(t, t) are of order t2/3. Proofs are based on the relation between the flux and the path of a second class particle, continuing the approach of Cator and Groeneboom [Ann. Probab. 33 (2005) 879–903].","longest increasing subsequence; Ulams problem; Hammersleys process; cube root convergence; second class particles; Burkes theorem","en","journal article","Institute of Mathematical Statistics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:83596e10-18ce-4596-9b4f-8eea22d32ab4","http://resolver.tudelft.nl/uuid:83596e10-18ce-4596-9b4f-8eea22d32ab4","The recognition of emotions from speech using GentleBoost classifier: A comparison approach","Datcu, D.; Rothkrantz, L.J.M.","","2006","The recognition of the internal emotional state of one person plays an important role in several human-related fields. Among them, human-computer interaction has recently received special attention. The current research is aimed at the analysis of segmentation methods and of the performance of the GentleBoost classifier on emotion recognition from speech. The data set used for emotion analysis is Berlin - a database of German emotional speech. A second data set is DES – Danish Emotional Speech data set is used for comparison purposes. Our contribution for the research community consists in a novel extensive study on the efficiency of using distinct numbers of frames per speech utterance for emotion recognition. Eventually, a set of GentleBoost 'committees' with optimal classification rates is determined based on an exhaustive study on the generated classifiers and on different types of segmentation.","emotion recognition; speech processing; GentleBoost; Human-Computer Interfaces","en","conference paper","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","","",""
"uuid:bf9cc466-c1e6-4092-8656-05b533c7085b","http://resolver.tudelft.nl/uuid:bf9cc466-c1e6-4092-8656-05b533c7085b","Adaptive motion compensation in sonar array processing","Groen, J.","Gisolf, A. (promotor); Simons, D.G. (promotor)","2006","In recent years, sonar performance has mainly improved via a significant increase in array ap-erture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine hunting motion of the sonar needs to be accounted for. For towed anti-submarine warfare sonar, beamforming methods are developed for the port/starboard (PS) discrimination problem and for Doppler compensation. In mine hunting, synthetic aperture sonar (SAS) is a promising technique to improve sonar performance by combining of multiple pings. Efficient imaging techniques with adequate motion compensa-tion are examined to control the data flow. Each processing method is implemented, tested and assessed. For the PS discrimination prob-lem three beamformers are investigated. These beamformers and the Doppler compensation methods are investigated theoretically, with simulations and with datasets recorded at sea. For mine hunting a complete SAS processing chain with motion estimation, motion compensation and imaging is developed. It is tested on simulations and on five independent experimental datasets. The processing techniques proposed substantially improve sonar performance. Triplet tech-nology is a successful solution for the PS discrimination problem. The ambiguous direction is sufficiently suppressed with the beamformers proposed. The requirements for Doppler com-pensation are derived analytically and are met without complicated adaptations. The SAS re-search showed that for the cases considered, wavenumber frequency imaging is preferred when using adequate motion compensation. 
A key-finding while comparing the imaging algo-rithms was enhancement of an important sonar classification clue, the acoustic shadow.","underwater acoustics; beamforming; synthetic aperture sonar; signal processing; simulation; imaging","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:86fb319e-07b6-4389-91eb-ceba89ef5e8d","http://resolver.tudelft.nl/uuid:86fb319e-07b6-4389-91eb-ceba89ef5e8d","Resolution of coherent and incoherent imaging systems reconsidered: Classical criteria and a statistical alternative","Van Aert, S.; Van Dyck, D.; Den Dekker, A.J.","","2006","The resolution of coherent and incoherent imaging systems is usually evaluated in terms of classical resolution criteria, such as Rayleigh’s. Based on these criteria, incoherent imaging is generally concluded to be ‘better’ than coherent imaging. However, this paper reveals some misconceptions in the application of the classical criteria, which may lead to wrong conclusions. Furthermore, it is shown that classical resolution criteria are no longer appropriate if images are interpreted quantitatively instead of qualitatively. Then one needs an alternative criterion to compare coherent and incoherent imaging systems objectively. Such a criterion, which relates resolution to statistical measurement precision, is proposed in this paper. It is applied in the field of electron microscopy, where the question whether coherent high resolution transmission electron microscopy (HRTEM) or incoherent annular dark field scanning transmission electron microscopy (ADF STEM) is preferable has been an issue of considerable debate.","general physics; probability theory; stochastic processes; statistics; coherence; noise in imaging systems; resolution","en","journal article","Optical Society of America","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:8049c0cf-e811-445f-bd69-4a00a6820c82","http://resolver.tudelft.nl/uuid:8049c0cf-e811-445f-bd69-4a00a6820c82","Rapid enterprise design","Mulder, J.B.F.","Dietz, J.L.G. (promotor)","2006","Existing methods for redesigning organizations are often not capable of meeting the required rate of change. This applies in particular to development methods for IT applications: the average automation procedure takes around two years to implement. Therefore, there is an urgent need for methods that make it possible to redesign and restructure organizations, preferably in an integral manner, within a few months. This demands a fundamentally different (scientifically grounded) method that can precisely specify the necessary interaction between the organization, communication, and information (systems). In 1996, this idea formed the impulse for the quest for a method of Rapid Design that could embrace the entire Enterprise. This research describes a ten-year period oriented toward the design of organizations by means of Design & Engineering Methodology for Organizations (DEMO). The study has been assigned the name Rapid Enterprise Design, and cover the rapid design of an organizationâs business functions, business processes, structure, and information provision. In short, the study deals with the issue of whether or not DEMO is an adequate method for the design of both large and small organizations. Investigation was also performed on ways in which DEMO could be further supplemented with a project management method so that it could justifiably be regarded as a completely formal method. For the study, the Action Research method was chosen: a method in which research is performed in stages and research questions are formulated for each stage on the basis of the results of the previous stage. Twenty-eight projects were implemented using DEMO, three of which are described as case studies. 
In short, the conclusion of the research is that DEMO, supplemented by a project management approach, is an excellent method for the rapid (re)design of both small and large organizations.","organization; business process; information system; organizational structure","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:0d7d5fd0-3d6c-48f0-8266-b04be9ed1010","http://resolver.tudelft.nl/uuid:0d7d5fd0-3d6c-48f0-8266-b04be9ed1010","A Conceptual Foundation of the ThinkLet Concept for Collaboration Engineering","Kolfschoten, G.L.; Briggs, R.O.; De Vreede, G.J.; Jacobs, P.H.M.; Appelman, J.H.","","2006","Organizations increasingly use collaborative teams in order to create value for their stakeholders. This trend has given rise to a new research field: Collaboration Engineering. The goal of Collaboration Engineering is to design and deploy processes for high-value recurring collaborative tasks, and to design these processes such that practitioners can execute them successfully without the intervention of professional facilitators. One of the key concepts in Collaboration Engineering is the thinkLet – a codified facilitation technique that creates a predictable pattern of collaboration. Because thinkLets produce a predictable pattern of interactions among people working together toward a goal they can be used as snap-together building blocks for team process designs. This paper presents an analysis of the thinkLet concept and proposes a conceptual object model of a thinkLet that may inform further developments in Collaboration Engineering.","collaboration engineering; thinkLets; collaboration; object oriented modeling; collaboration process design; facilitation; group support systems","en","journal article","Elsevier","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","",""
"uuid:2d495291-9725-4c1d-a61e-b909d857b0be","http://resolver.tudelft.nl/uuid:2d495291-9725-4c1d-a61e-b909d857b0be","Get synchronized: Bridging the gap between design and volume production","Smulders, F.E.H.M.","Buijs, J.A. (promotor); Dorst, C.H. (promotor)","2006","The interface between Design and Manufacturing forms a locus of frequent interpersonal conflict. Misunderstandings, unwelcome surprises and planning problems are the rule rather than the exception. Within companies that deliver consumer goods in large quantities to the market this interface is also the transition from exploration (seeking new business opportunities) to exploitation (profiting from those consumer products). This thesis reports on a first exploration of the Design-Manufacturing interface on the level of the participants from both processes using the method of Grounded Theory. This book conceptually describes how these actors bridge the gap between Design and Volume Production and portrays their social process in detail. The insights presented here are to be seen as a social-interactive perspective on the process of product innovation and are complementary to the rational-analytic viewpoint that focuses on the material and tangibility of product and process. The kind of research that this book presents reflects the increased attention of academic researchers towards the human dimension of the product innovation process. Over the last decade the focus of design researchers has widened from individual designers, via teams of designers towards design teams in corporate settings. This movement increasingly views design as a social process which connects the engineering sciences with the social sciences.","product innovation; social process; collaboration; boundary crossing; exploration; exploitation; mental models; design-manufacturing interface; grounded theory","en","doctoral thesis","","","","","","","","","Industrial Design Engineering","","","","",""
"uuid:6f817969-cbd6-4c00-b55d-cec92cfd89ce","http://resolver.tudelft.nl/uuid:6f817969-cbd6-4c00-b55d-cec92cfd89ce","Design of a process to generate power from natural gas, using catalyzed decomposition and fuel cells","Van de Lindeloof, A.M.; Brouérius van Nidek, V.; Van der Neut, A.G.; Kuijvenhoven, K.; Sehmidt, M.P.","","2006","","Power generation; Methane decomposition; Fuel cells; DDM; SOFC; DCFC; Process intensification","en","report","Delft University of technology","","","","","","","2016-01-17","Applied Sciences","DelftChemTech","","","",""
"uuid:a572f1bc-af2a-4d83-8256-4ac904c10db4","http://resolver.tudelft.nl/uuid:a572f1bc-af2a-4d83-8256-4ac904c10db4","Contextual awareness in mobile information processing","Koutamanis, A.","","2006","","Mobile information processing; context; representation; information systems","en","conference paper","","","","","","","","","Architecture","","","","",""
"uuid:b6ae4f9e-89c5-4d55-87d8-aeba045b8cb3","http://resolver.tudelft.nl/uuid:b6ae4f9e-89c5-4d55-87d8-aeba045b8cb3","Knowledge driven facial modelling","Wojdel, A.W.","Koppelaar, H. (promotor); Rothkrantz, L.J.M. (promotor)","2005","This research aims at supporting users if not involved in computer graphics, facial physiology, or psychology and in need of generating realistic facial animations. Realism is to be understood in terms of the visual appeal of a single rendered image and focused on believable behaviour of the animated face. Our goal is to develop a system enabling semi-automatic facial animation, allowing an average user to generate facial animation in a simple manner. A system with knowledge about the communicative functions of facial expressions that would support an average user to generate facial animation valid from a psychological and physiological point of view.","facial animation; knowledge extraction; image processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","http://resolver.tudelft.nl/uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","Model reduction for dynamic real-time optimization of chemical processes","Van den Berg, J.","Bosgra, O.H. (promotor)","2005","The value of models in process industries becomes apparent in practice and literature where numerous successful applications are reported. Process models are being used for optimal plant design, simulation studies, for off-line and online process optimization. For online optimization applications the computational load is a limiting factor. The focus of this thesis is on nonlinear model approximation techniques aiming at reduction of computational load of a dynamic real-time optimization problem. Two types of model approximation methods were selected from literature and assessed within a dynamic optimization case study: model reduction by projection and physics-based model reduction. Model order reduction by projection is partially successful. Even with a strongly reduced number of transformed differential equations it is possible to compute acceptable approximate solutions. Projection does not provide predictable results in terms of simulation error and stability and does not reduce the computational load of simulation. On the other hand, physics-based model reduction appeared to be very successful in reducing the computational load of the sequential dynamic optimization problem.","chemical processes; model reduction; optimization","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:0056229a-0e45-48ca-b9b8-be6dcadfc5db","http://resolver.tudelft.nl/uuid:0056229a-0e45-48ca-b9b8-be6dcadfc5db","Gaining insight into business networks: A simulation based support environment to improve process orchestration","Tewoldeberhan, T.W.","Sol, H.G. (promotor); Verbraeck, A. (promotor)","2005","In today's world, organizations are becoming increasingly interested in using business networks as a means to adapt to the ever-changing environment to increase their performance level. As a result, the focus of efforts to improve the performance of organizations has shifted from organizational level to the inter-organizational level. An important challenge organizations need to meet in a business network is efficient and reliable business process orchestrations with their partners. Limited visibility of business process orchestrations in the network is one reason. In the research presented in this thesis, we examined process orchestration issues in business networks using the US Department of Defence business network as a case study. From the case study we developed requirements for a support environment to improve the efficiency and reliability of process orchestration in business networks. Based on the requirements, we developed a simulation based support environment that can be used to assist organizations to design an efficient and reliable process orchestration. The simulation based support environment consists of a methodology, which guides the process of improving process orchestration, and a software tool, which assists the development of simulation models. The support environment was tested in an experimental setting. Experts in the field evaluated the usefulness of the support environment. 
The experiments and expert evaluation show that the support environment can be used to support the process of improving the efficiency and reliability of process orchestration in business networks.","business process orchestration; decision support; simulation and modeling; service orientation; business networks","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:677576d1-09bd-4e82-9461-e17a1fce8f62","http://resolver.tudelft.nl/uuid:677576d1-09bd-4e82-9461-e17a1fce8f62","Designing a lean energy process for resin production, by alternative treatment of process water using freeze concentration or direct vapour incineration","Boerman, D.J.; Van Dijk, R.; Van der Heijden, M.; Pickhardt, P.; Vouwzee, S.","","2005","","freeze concentration; FC; direct vapour incineration,; DVI; powder coating resins; energy saving; process water; DSM","en","report","Delft University of Technology","","","","","","","2015-12-13","Applied Sciences","DelftChemTech","","","",""
"uuid:6ab594c2-7650-4c24-a765-f4d661a0a9cb","http://resolver.tudelft.nl/uuid:6ab594c2-7650-4c24-a765-f4d661a0a9cb","Ecophysiological characterization of microbial communities in BioDeNOx","Kumaraswamy, R.","Kuenen, J.G. (promotor); Van Loosdrecht, M.C.M. (promotor)","2005","Nitric oxide (NO) and nitrogen dioxide (NO2) are both important Green house gases, which also cause air pollution. Industrial flue gas emissions are responsible for 17% of NOx/SOx released into the atmosphere. Treatment of flue gas to remove NO and NO2 had started a few decades ago and several chemical and biological processes have been developed to remove NOx from the flue gas BioDeNOx is a biological NOx removal process in which microbial communities are used for the reduction of NO to N2 at elevated temperatures (50 to 55 °C). Many microbiologists and process engineers have tried to use pure or co-cultures of bacteria for this purpose, however so far mixed cultures have not been studied extensively. Recently, the BioDeNOx-process has been proposed and developed for the removal of NO from flue gas by Buisman et al. (1999). In this process the NOx is absorbed in a Fe(II)EDTA2- solution followed by microbial denitrification of the NO in Nitrosyl complex (Fe(II)EDTA.NO2-) to N2. This thesis deals with the microbiology of the BioDeNox process. The diversity, identity and activity of the microorganisms performing the above-mentioned reactions are described. This was achieved by the isolation and characterization of the microorganisms, and by using culture-independent molecular tools for the identification and quantification of unculturable microorganisms. 
In addition, the relationship between diversity and activity in the BioDeNOx process was studied by operating a labscale reactor in which simultaneous Fe(II)EDTA.NO2- reduction and Fe(III)EDTA- reduction were taking place.","biodenox; microbial processes; waste purification; bacillus azutoformans; nox removal","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:c3fd77aa-1a0f-4a71-92d2-86a722ed1366","http://resolver.tudelft.nl/uuid:c3fd77aa-1a0f-4a71-92d2-86a722ed1366","Retrieving the Green’s function in an open system by cross correlation: A comparison of approaches (L)","Wapenaar, C.P.A.; Fokkema, J.; Snieder, R.","","2005","We compare two approaches for deriving the fact that the Green’s function in an arbitrary inhomogeneous open system can be obtained by cross correlating recordings of the wave field at two positions. One approach is based on physical arguments, exploiting the principle of time-reversal invariance of the acoustic wave equation. The other approach is based on Rayleigh’s reciprocity theorem. Using a unified notation, we show that the result of the time-reversal approach can be obtained as an approximation of the result of the reciprocity approach.","Green's function methods; acoustic wave propagation; acoustic wave scattering; vibrations; structural acoustics; acoustic signal processing; seismology","en","journal article","Acoustical Society of America","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:6b77a054-6b1a-49fb-8551-d652418c4242","http://resolver.tudelft.nl/uuid:6b77a054-6b1a-49fb-8551-d652418c4242","Designing reactive distillation processes with improved efficiency","Almeida-Rivera, C.P.","Grievink, J. (promotor)","2005","In this dissertation a life-span inspired perspective is taken on the conceptual design of grassroots reactive distillation processes. Attention was paid to the economic performance of the process and to potential losses of valuable resources over the process life span. The research was cast in a set of goal-oriented engineering and specific scientific design questions. The scientific novelty of this work is based around four key aspects of reactive distillation process design: (i) the formulation of an extended design problem in reactive distillation achieved by refreshing it in the wider context of process development and engineering and in a more relevant way regarding sustainability; (ii) the definition of an integrated design methodology achieved by analyzing current design methodologies and bridging the gaps between them; while we suggest this methodology as a way to beat the design complexity by decomposition, it requires the mastery of many tools and many concepts; (iii) the improvement of design tools achieved by exploring and extending current techniques and systematically applying them to the reactive distillation case; (iv) the definition of performance criteria that can be used to account for the process performance from a life-span inspired perspective, as well as applications of them.","process systems engineering; reactive distillation; conceptual process design; multiechelon design approach; life-span inspired design methodology","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:d69c2f38-167f-4ee9-95b6-1df013098062","http://resolver.tudelft.nl/uuid:d69c2f38-167f-4ee9-95b6-1df013098062","Modelling the Melting of Post-consumer Scrap within a Rotary Melting Furnace for Aluminium Recycling","Zhou, B.","Reuter, M.A. (promotor)","2005","","secondary aluminium; scrap melting; rotary furnace; computational fluid dynamics (CFD); process modelling; population balance model (PBM); sustainability","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:db5f3d18-ce49-496a-8328-ec9203136949","http://resolver.tudelft.nl/uuid:db5f3d18-ce49-496a-8328-ec9203136949","Techniques and software architectures for medical visualisation and image processing","Botha, C.P.","Jansen, F.W. (promotor); Post, F.H. (promotor)","2005","This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use of visualisation techniques to assist the shoulder replacement process. This motivated the need for a flexible environment within which to test and develop new visualisation and also image processing techniques with a medical focus. The Delft Visualisation and Image processing Development Environment, or DeVIDE, was created to answer this need. DeVIDE is a graphical data-flow application builder that combines visualisation and image processing techniques, supports the rapid creation of new functional components and facilitates a level of interaction with algorithm code and parameters that differentiates it from similar platforms. For visualisation, measurement and pre-operative planning, an accurate segmentation from CT data of the bony structures of the shoulder is required. Due to the complexity of the shoulder joint and the fact that a method was required that could deal with diseased shoulders, existing techniques could not be applied. In this thesis we present a suite of techniques for the segmentation of the skeletal structures from CT data, especially designed to cope with diseased shoulders. Direct volume rendering, or DVR, is a useful visualisation technique that is often applied as part of medical visualisation solutions. A crucial component of an effective DVR visualisation is a suitable transfer function that assigns optical characteristics to the data. 
Finding a suitable transfer function is a challenging task. We present two highly interactive methods that facilitate this process. We also present a method for interactive direct volume rendering on ubiquitous low-end graphics hardware. This method, called ShellSplatting, is optimised for the rendering of bony structures from CT data and supports the hardware-assisted blending of traditional surface rendering and direct volume rendering. This characteristic is useful in surgical simulation applications. ShellSplatting is based on the object-order splatting of discrete voxels. As such, maintaining a correct back-to-front or front-to-back ordering during rendering is crucial for correct images. All existing real-time perspective projection visibility orderings show artefacts when splatting discrete voxels. We present a new ordering for perspective projection that remedies these artefacts without a noticeable performance penalty.","medical visualisation; pre-operative planning; volume visualisation; image processing; ct data segmentation","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:e6fc993f-e5ce-435a-83c4-866d57fe573d","http://resolver.tudelft.nl/uuid:e6fc993f-e5ce-435a-83c4-866d57fe573d","Spatial filtering of interfering signals at the initial Low Frequency Array (LOFAR) phased array test station","Boonstra, A.J.; Van der Tol, S.","","2005","The Low Frequency Array (LOFAR) is a radio telescope currently being designed. Its targeted observational frequency window lies in the range of 10–250 MHz. In frequency bands in which there is interference, the sensitivity of LOFAR can be enhanced by interference mitigation techniques. In this paper we demonstrate spatial filtering capabilities at the LOFAR initial test station (ITS) and relate it to the LOFAR radio frequency interference mitigation strategy. We show that in frequency ranges which are occupied with moderate-intensity man-made radio signals, the strongest observed astronomical sky sources can be recovered by spatial filtering. We also show that under certain conditions, intermodulation products of point-like interfering sources remain point sources. This means that intermodulation product filtering can be done in the same way as for “direct” interference. We further discuss some of the ITS system properties such as cross-talk and sky noise limited observations. Finally, we demonstrate the use of several beam former types for ITS.","interference mitigation; array processing; radio astronomy","en","journal article","American Geophysical Union","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Electrical Engineering","","","",""
"uuid:13a49641-d351-4fc1-a108-ea0a90033ae6","http://resolver.tudelft.nl/uuid:13a49641-d351-4fc1-a108-ea0a90033ae6","Learning-based model predictive control for Markov decision processes","Negenborn, R.R.; De Schutter, B.; Wiering, M.A.; Hellendoorn, H.","","2005","We propose the use of Model Predictive Control (MPC) for controlling systems described by Markov decision processes. First, we consider a straightforward MPC algorithm for Markov decision processes. Then, we propose value functions, a means to deal with issues arising in conventional MPC, e.g., computational requirements and sub-optimality of actions. We use reinforcement learning to let an MPC agent learn a value function incrementally. The agent incorporates experience from the interaction with the system in its decision making. Our approach initially relies on pure MPC. Over time, as experience increases, the learned value function is taken more and more into account. This speeds up the decision making, allows decisions to be made over an infinite instead of a finite horizon, and provides adequate control actions, even if the system and desired performance slowly vary over time. If you want to cite this report, please use the following reference instead: R.R. Negenborn, B. De Schutter, M.A. Wiering, and H. Hellendoorn, “Learning-based model predictive control for Markov decision processes,” Proceedings of the 16th IFAC World Congress, Prague, Czech Republic, 6 pp., July 2005. Paper 2106 / We-M16-TO/2.","Markov decision processes; predictive control; learning","en","report","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:1970e1d9-acbe-4fc4-8940-0dd657dbc04d","http://resolver.tudelft.nl/uuid:1970e1d9-acbe-4fc4-8940-0dd657dbc04d","The analytical hierarchy process applied for design analysis","Ciftcioglu, O.; Sariyildiz, I.S.","","2005","Being an intelligent activity, design is a complex process to accomplish. The complexity stems from the elusive character of this activity, which cannot be explained in precise terms, in general. In a design process, the determined relationships among the design elements provide important information to understand the role of each element with respect to others, thereby improving the design. For this aim the method of the analytical hierarchy process (AHP) is employed, which provides hierarchical priorities of the design elements with respect to the parsed design goal. The priority information is extended to establish hierarchical relations among the elements as a novel approach to employ in the architectural design process.","design analysis; attribute relations; analytical hierarchy process (AHP)","en","conference paper","Optima","","","","","","","","Architecture and The Built Environment","Architectural Engineering +Technology","","","",""
"uuid:5e4189ec-3d55-490b-aacc-b0b69051deb4","http://resolver.tudelft.nl/uuid:5e4189ec-3d55-490b-aacc-b0b69051deb4","The analytical hierarchy process applied for design analysis","Ciftcioglu, O.; Sariyildiz, I.S.","","2005","Being an intelligent activity, design is a complex process to accomplish. The complexity stems from the elusive character of this activity, which cannot be explained in precise terms, in general. In a design process, the determined relationships among the design elements provide important information to understand the role of each element with respect to others, thereby improving the design. For this aim the method of the analytical hierarchy process (AHP) is employed, which provides hierarchical priorities of the design elements with respect to the parsed design goal. The priority information is extended to establish hierarchical relations among the elements as a novel approach to employ in the architectural design process.","design analysis; attribute relations; analytical hierarchy process (AHP)","en","conference paper","","","","","","","","","Architecture and The Built Environment","","","","",""
"uuid:caa1942c-4180-4a17-88db-cec359490aad","http://resolver.tudelft.nl/uuid:caa1942c-4180-4a17-88db-cec359490aad","Radio Frequency Interference Mitigation in Radio Astronomy","Boonstra, A.J.","van der Veen, A.J. (promotor)","2005","The next generation of radio telescopes is expected to be one to two orders of magnitude more sensitive than the current generation. Examples of such new telescopes are the Low Frequency Array (LOFAR), currently under construction in the Netherlands, and the Square Kilometer Array (SKA), currently in a concept study phase. Another trend is that technological advances in the fields of electronics and communications systems have led to a vast increase in radio communication applications and systems, and also to an increasing demand for radio spectrum. These two trends, more sensitive telescopes and a much denser spectrum use, imply that radio astronomy will become more vulnerable to interference from radio transmitters. Although protection criteria exist for radio astronomy, it becomes increasingly difficult to keep the radio astronomy frequency bands free from interference. In order to mitigate interference in radio astronomical data, filtering techniques can be used. In this thesis, modern array signal processing techniques have been applied to narrow-band multichannel interference detection and excision, and to narrow-band spatial interference filtering. By investigating the subspace structure of the telescope array output covariance matrices, new results were found, such as upper limits on interference residuals after excision and spatial filtering. The effect of bandwidth, extendedness of the interfering sources, and multipath effects on the detection and spatial filter effectiveness were studied as well. The advantage of a multichannel approach over a single telescope approach was demonstrated by using experimental data from the Westerbork Synthesis Radio Telescope (WSRT). 
As the performance of mitigation algorithms can be improved by calibration of the telescope gains and noise powers, calibration algorithms were developed. These algorithms were verified both for single and dual polarised arrays. Finally, a LOFAR interference mitigation strategy was developed.","interference; interference mitigation; radio astronomy; array signal processing; calibration; spatial filtering; excision","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:72df1ede-90a5-4047-a77e-7e4758b6ca32","http://resolver.tudelft.nl/uuid:72df1ede-90a5-4047-a77e-7e4758b6ca32","Which standards’ characteristics increase system flexibility? Comparing ICT and Batch Processing Infrastructures","Egyedi, T.M.; Verwater-Lukszo, Z.","","2005","Most large Information and Communication Technology (ICT) systems develop in a piece-meal fashion. Their complexity and evolution is difficult to manage. They lack flexibility. This contrasts sharply with system design in the batch-wise processing industry, where flexibility has always had a high priority. In this industry, the S88 standard plays an important flexibility-enhancing role. The paper compares the two fields of technology and explores which standards’ characteristics increase system flexibility. It examines whether flexibility objectives in both fields differ, and what constitutes a ‘flexible standard’. Four standards’ characteristics turn out to be important: degree of specificity, level of abstraction, system level, and degree of simplicity. They seem to be a necessary condition for standards to create flexible systems, but whether they are a sufficient condition cannot yet be said.","standard; flexibility; LTSs; information technology; batch processing; S88; OSI; Internet","en","journal article","Elsevier","","","","","","","","Technology, Policy and Management","Information and Communication Technology","","","",""
"uuid:af3be91f-578a-4621-ad2b-7e8d6022d2ed","http://resolver.tudelft.nl/uuid:af3be91f-578a-4621-ad2b-7e8d6022d2ed","Regularized phase tracker with isophase scanning strategy for analysis of dynamic interferograms of nonwetting droplets under excitation","Van den Doel, L.R.; Nagy, P.T.; Van Vliet, L.J.; Neitzel, P.","","2005","The surface of a nonwetting droplet is separated from a solid surface by a continuous supply of a lubricating gas film within the apparent contact region. Under certain conditions, e.g., application of an external excitation force, the gas film thickness can decrease to a level where intermolecular forces cause the droplet to wet the surface. The thickness of the lubricating film can be measured by interferometry. Externally imposed oscillations change the shape of the film, leading to dynamic interference fringes that are recorded with a high-speed CCD camera. We propose a spatiotemporal analysis of the interference patterns based on the regularized phase-tracker method. This well-known method minimizes a cost function to estimate the absolute phase of a single element in the interferogram. A proper scanning method along all elements of the interferogram is necessary to avoid phase estimation errors that will propagate throughout the entire continuous phase image of interest. The scanning method we propose traces along contours of constant phase in the interferogram and does not require segmentation of the interferogram in dark and bright fringes. Results in the form of dynamic height profiles of droplets under excitation obtained by this method are presented.","digital image processing; fringe analysis","en","journal article","Optical Society of America","","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:3aff7642-3666-41c8-8458-29915054dfc8","http://resolver.tudelft.nl/uuid:3aff7642-3666-41c8-8458-29915054dfc8","Hammersley's process with sources and sinks","Cator, E.A.; Groeneboom, P.","","2005","We show that, for a stationary version of Hammersley’s process, with Poisson “sources” on the positive x-axis, and Poisson “sinks” on the positive y-axis, an isolated second-class particle, located at the origin at time zero, moves asymptotically, with probability 1, along the characteristic of a conservation equation for Hammersley’s process. This allows us to show that Hammersley’s process without sinks or sources, as defined by Aldous and Diaconis [Probab. Theory Related Fields 10 (1995) 199–213] converges locally in distribution to a Poisson process, a result first proved in Aldous and Diaconis (1995) by using the ergodic decomposition theorem and a construction of Hammersley’s process as a one-dimensional point process, developing as a function of (continuous) time on the whole real line. As a corollary we get the result that EL(t, t)/t converges to 2, as t → ∞, where L(t, t) is the length of a longest North-East path from (0, 0) to (t, t). The proofs of these facts need neither the ergodic decomposition theorem nor the subadditive ergodic theorem. We also prove a version of Burke’s theorem for the stationary process with sources and sinks and briefly discuss the relation of these results with the theory of longest increasing subsequences of random permutations.","Longest increasing subsequence; Ulam's problem; Hammersley's process; local Poisson convergence; totally asymmetric simple exclusion processes (TASEP); second-class particles; Burke's theorem","en","journal article","Institute of Mathematical Statistics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:0fa61991-0b71-432f-962a-0bec67e7f3da","http://resolver.tudelft.nl/uuid:0fa61991-0b71-432f-962a-0bec67e7f3da","Design reuse in product shape modeling: A study of freeform feature reuse by signal processing techniques","Wang, C.","Stappers, P.J. (promotor); Vergeest, J.S.M. (promotor)","2005","Lack of facilities in supporting design reuse is a serious problem in product shape modeling, especially in computer-aided design systems. This becomes a bottleneck of fast shape conceptualization and creation in consumer product design, which consequently prohibits creativity and innovation. In the past, several efforts have been made in order to improve this situation, with confined methodologies in the spatial domain, following conventional ways of geometrical operations. These domain-dependent research efforts did not yield satisfactory solutions. Looking at the state of the art technologies, to find a better solution, an investigation applying interdisciplinary knowledge has to be conducted. The present study aimed at finding a systematic approach to support design reuse in shape modeling, especially Freeform Feature (FFF) reuse, by hypothesizing that a better solution could be achieved by applying signal processing techniques. This global goal was further decomposed into a number of concrete objectives, each correlated to a broad spectrum of domain specific knowledge. Investigations on relevant subjects enrich the aggregation of knowledge, especially that concerning computer-assisted technologies in the industrial design field. Solutions of this study functionally extend the capability of shape modeling, and enhance the interchangeability of shape depiction between the spatial and the frequency domain. A number of examples were employed to test the methods and mathematical formulations proposed. 
The results affirm that the hypothesis works, and that the methodology developed in this research is both effective and beneficial.","CAD/CAM; shape modeling; Fourier transforms; signal processing; shape descriptor","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:3841e7fc-b740-4450-a480-4aca55e670e6","http://resolver.tudelft.nl/uuid:3841e7fc-b740-4450-a480-4aca55e670e6","Three-dimensional analysis tool for segmenting and measuring the structure of telomeres in mammalian nuclei","Vermolen, B.J.; Young, I.T.; Chuang, A.; Wark, L.; Chuang, T.; Mai, S.; Garini, Y.","","2005","Quantitative analysis in combination with fluorescence microscopy calls for innovative digital image measurement tools. We have developed a three-dimensional tool for segmenting and analyzing FISH stained telomeres in interphase nuclei. After deconvolution of the images, we segment the individual telomeres and measure a distribution parameter we call ρT. This parameter describes if the telomeres are distributed in a sphere-like volume (ρT ≈ 1) or in a disk-like volume (ρT ≫ 1). Because of the statistical nature of this parameter, we have to correct for the fact that we do not have an infinite number of telomeres to calculate this parameter. In this study we show a way to do this correction. After sorting mouse lymphocytes and calculating ρT and using the correction introduced in this paper we show a significant difference between nuclei in G2 and nuclei in either G0/G1 or S phase. The mean values of ρT for G0/G1, S and G2 are 1.03, 1.02 and 13 respectively.","telomeres; 3D imaging; image processing; fluorescence microscopy; FISH","en","conference paper","SPIE","","","","","","","","Applied Sciences","Quantitative Imaging Group","","","",""
"uuid:97cd3281-bbd2-4cbb-babc-998d9059cbc0","http://resolver.tudelft.nl/uuid:97cd3281-bbd2-4cbb-babc-998d9059cbc0","Toepasbaarheid optische golfmetingen te Egmond","Cohen, A.B.; Aarninkhof, S.G.J.","","2005","","beeldverwerking; image processing; golfmeting; wave measurement; golfkarakteristieken; wave characteristics; Noord-Holland","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:987bc48f-13e3-4cf8-9f6b-404865b161e9","http://resolver.tudelft.nl/uuid:987bc48f-13e3-4cf8-9f6b-404865b161e9","EP Adoption and non-adoption: More than just the mirror image?","Hultman, J.; Reunis, M.R.B.; Santema, S.C.","","2005","","adoption; non-adoption; process; e-procurement","en","conference paper","EIPM","","","","","","","","Aerospace Engineering","","","","",""
"uuid:82683a15-8f9f-4192-b1c1-28a120d6c67e","http://resolver.tudelft.nl/uuid:82683a15-8f9f-4192-b1c1-28a120d6c67e","Metadata as a means for correspondence on digital media","Stouffs, R.; Kooistra, J.; Tuncer, B.","","2004","Metadata derive their action from their association to data and from the relationship they maintain with this data. An interpretation of this action is that the metadata lays claim to the data collection to which it is associated, where the claim is successful if the data collection gains quality as a result of it. We assume that the design process manifests itself in this way: the designer lays claim to data in such a way that this data gains quality. Claims form part of a complex adaptive system in which agreement on the quality of claims is achieved through correspondence. Applied in the context of a design studio, the result is a digital media library that is both the subject and result of the educational process. By teaching students how to express and utilise these claims and their qualities in their communication with peers, they can learn to become more effective in their use of information from various sources to support such communication. They will also learn how to build digital media libraries as a collective result of their communication. In this paper, we describe a methodology for adding, utilising and managing metadata and present some intermediate results from implementing this methodology into education.","metadata; learning process; design analysis; architectural language; e-learning","en","journal article","International Council for Research and Innovation in Building and Construction","","","","","","","","Architecture","","","","",""
"uuid:41b62a9a-c15e-4026-b600-66ca4701941c","http://resolver.tudelft.nl/uuid:41b62a9a-c15e-4026-b600-66ca4701941c","Filament Winding. A Unified Approach","Koussios, S.","Beukers, A. (promotor); Gurdal, Z. (promotor); Van Tooren, M.J.L. (promotor)","2004","In this dissertation we have presented an overview and comprehensive treatment of several facets of the filament winding process. With the concepts of differential geometry and the theory of thin anisotropic shells of revolution, a parametric shape generator has been formulated for the design procedure of optimal composite pressure vessels in particular. The mathematical description of both geodesic and non-geodesic roving trajectories has been presented, including a proposal for a mandrel shape that facilitates the experimental procedure for the determination of the coefficient of friction. In addition, an overview of several (non-) geodesic trajectories is here given. Furthermore, an algorithm for the automatic generation of suitable winding patterns has been outlined, in combination with several pattern optimisation strategies. An extensive treatment of the kinematics of filament winding is here presented, in combination with several recommendations for a proper derivation of the associated velocities and accelerations to which the moving machine parts and the roving itself are subjected. A simplified collision control module has resulted in the determination of the limits within which the feed eye is allowed to move. Within this space and with the dynamic machine limits, an optimisation problem has been set up, serving the aim of production time minimisation. This has been achieved by application of dynamic programming that minimises a summation of constraint-respecting time increments, after the realisation of a grid reduction with a technique that is based on elementary sparse matrix multiplication. 
Furthermore, several novel machine configurations have been proposed, which are dedicated to pressure vessels with various aspect ratios, shape morphology and types of applied wound circuits. With the shell equilibrium equations as a basis, we have derived the class of articulated pressurisable structures, comprising isotensoids that are axially stacked on each other. Moreover, the non-geodesically overwound isotensoid has been introduced, together with a variant being additionally subjected to external radial forces. The same equilibrium equations have generated shapes like the geodesically overwound hyperboloid and optimal toroidal pressure vessels. Furthermore, we have proposed several application fields for these items. As a leitmotiv throughout the thesis, the derived methodologies and equations have been applied to the class of isotensoid pressure vessels. The results generated by the roving trajectory description modules and pattern generation algorithms are verified by simulation, while the results of the kinematic solver and the optimiser are evaluated by both simulation and implementation on a winding machine. However, mechanical testing of the proposed structures and test-running of the introduced machine configurations must here be left to the recommendations.","filament winding; pressure vessel design; production process; optimisation","en","doctoral thesis","Delft University Press","","","","","","","","Aerospace Engineering","","","","",""
"uuid:91e396a8-5857-4e93-ba27-8a82d0d9f2b7","http://resolver.tudelft.nl/uuid:91e396a8-5857-4e93-ba27-8a82d0d9f2b7","PECVD silicon carbide: A structural material for surface micromachined devices","Pham, H.T.M.","Sarro, P.M. (promotor)","2004","","Silicon carbide material; surface micromachining; post-processing; micro-electromechanical systems","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:ca444927-aa5b-432b-85ef-326e4409a069","http://resolver.tudelft.nl/uuid:ca444927-aa5b-432b-85ef-326e4409a069","Information processing in design","Restrepo-Giraldo, J.D.","Green, W.S. (promotor); Rodríguez, A. (promotor)","2004","","information processing; design methodology; design strategies; fixation; QBE systems; relevance criteria; design requirements; design precedents","en","doctoral thesis","Delft University Press","","","","","","","","Industrial Design Engineering","","","","",""
"uuid:048373b6-7e28-495d-b561-898d2474a65f","http://resolver.tudelft.nl/uuid:048373b6-7e28-495d-b561-898d2474a65f","Golfstatistiek op relatief diep water 1979-2002","Weerts, A.H.; Diermanse, F.L.M.","","2004","","golfgegevens; wave data; gegevensverwerking; data processing","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:ba65b2e1-3a02-4c80-b4cd-9672799123df","http://resolver.tudelft.nl/uuid:ba65b2e1-3a02-4c80-b4cd-9672799123df","Haalbaarheidsstudie HYDRA-K: Aanvullend onderzoek","Diermanse, F.L.M.","","2004","","stochastische processen; stochastic processes; golflengte; wave length; waterkeringen; flood protection works","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:12a80c76-e2d3-4938-afc6-7fb47e8e399a","http://resolver.tudelft.nl/uuid:12a80c76-e2d3-4938-afc6-7fb47e8e399a","Democratic processing: Mastering the complexity of communicating systems","van Dijk, H.W.","Sips, H.J. (promotor); Lagendijk, R.L. (promotor)","2004","","QoS; context aware processing; multidisciplinary optimisation; multiobjective optimisation; democratic processing; communicating systems","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:6083701f-09fa-4cc2-b45e-b1b74c117994","http://resolver.tudelft.nl/uuid:6083701f-09fa-4cc2-b45e-b1b74c117994","A step forward in the theory and practice of ICT management simulation","Ilkov, I.G.","Looijen, M. (promotor)","2004","The popularity of animated simulation as a tool for modeling and analyzing business processes is growing. This is due to the fact that it offers a number of benefits for modeling such processes, among which the ability to capture their stochastic character, represent the changes in their characteristics in the course of time and model and visualize their dynamic behavior. The current research focuses on the application of animated simulation for the modeling of ICT (Information and Communication Technology) management processes. A number of issues in building simulation models of such processes are identified, which include the lack of generally adopted concepts and approaches and the low degree of comparability and reusability of the results of simulation studies. In order to address these issues a conceptual framework, a set of reusable simulation constructs and a step-by-step approach for simulating ICT management processes were developed as part of this research. The conceptual framework allows for capturing relevant characteristics of these processes, among which the tasks carried out as part of the processes, the workplaces at which they are carried out, the exchanged information and the used equipment. The simulation constructs, implemented as a simulation template in the ARENA simulation environment, provide the programming definitions and logic necessary to represent these characteristics in a simulation model. The step-by-step approach describes the steps that have to be taken to build such a model using the developed conceptual framework and simulation constructs. 
The way in which the conceptual framework, the simulation template and the step-by-step approach can be applied for building ICT management simulation models is described based on two test cases carried out in two organizations in The Netherlands.","ict management processes; simulation","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:d8339178-a237-4dbb-8e2b-df3783d87282","http://resolver.tudelft.nl/uuid:d8339178-a237-4dbb-8e2b-df3783d87282","Experience-based model predictive control using reinforcement learning","Negenborn, R.R.; De Schutter, B.; Wiering, M.A.; Hellendoorn, J.","","2004","Model predictive control (MPC) is becoming an increasingly popular method to select actions for controlling dynamic systems. Traditionally MPC uses a model of the system to be controlled and a performance function to characterize the desired behavior of the system. The MPC agent finds actions over a finite horizon that lead the system into a desired direction. A significant problem with conventional MPC is the amount of computation required and the suboptimality of chosen actions. In this paper we propose the use of MPC to control systems that can be described as Markov decision processes. We discuss how a straightforward MPC algorithm for Markov decision processes can be implemented, and how it can be improved in terms of speed and decision quality by considering value functions. We propose the use of reinforcement learning techniques to let the agent incorporate experience from the interaction with the system in its decision making. This experience speeds up the decision making of the agent significantly. Also, it allows the agent to base its decisions on an infinite instead of finite horizon. The proposed approach can be beneficial for any system that can be modeled as a Markov decision process, including systems found in areas like logistics, traffic control, and vehicle automation.","Markov decision process; model predictive control; reinforcement learning","en","conference paper","TRAIL Research School","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:5dbde3b6-9c33-422b-8fe7-5bd4b3c81d58","http://resolver.tudelft.nl/uuid:5dbde3b6-9c33-422b-8fe7-5bd4b3c81d58","Praktische facetten inzet Argus","Aarninkhof, S.G.J.","","2004","","beeldverwerking; image processing; kustbeheer; coastal zone management; programma-ontwikkeling; software development; kosten-batenanalyse; cost benefit analysis","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:e5ff09af-7704-4a46-b955-7509b6e89b4a","http://resolver.tudelft.nl/uuid:e5ff09af-7704-4a46-b955-7509b6e89b4a","Process system innovation by design: Towards a sustainable petrochemical industry","Dijkema, G.P.J.","Weijnen, M.P.C. (promotor); Grievink, J. (promotor)","2004","","fuel cells; functional modelling; innovation; olefins; petrochemical industry; process systems engineering; sustainable development","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:2fdeaa3e-645f-4d4d-a3b3-1bd7a852575c","http://resolver.tudelft.nl/uuid:2fdeaa3e-645f-4d4d-a3b3-1bd7a852575c","Golfstatistiek op diep water 2002 fase 1: Afleiden golfstatistieken","Weerts, A.H.; Diermanse, F.L.M.","","2004","","golfgegevens; wave data; programma-ontwikkeling; software development; gegevensverwerking; data processing","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:e0d5af2b-7bf6-4cd8-9a22-aac3f8d112fe","http://resolver.tudelft.nl/uuid:e0d5af2b-7bf6-4cd8-9a22-aac3f8d112fe","Theoretical and practical aspects of modelling activated sludge processes","Meijer, S.C.F.","Van Loosdrecht, M.C.M. (promotor); Heijnen, J.J. (promotor)","2004","This thesis describes the full-scale validation and calibration of a integrated metabolic activated sludge model for biological phosphorus removal. In chapters 1 and 2 the metabolic model is described, in chapters 3 to 6 the model is tested and in chapters 7 and 8 the model is put into practice. Chapter 1 is a general introduction to this research. Chapter 2 is a more specific introduction to the metabolic biological phosphorus removal (BioP) model. The goal of this introduction is to obtain a better understanding of the stoichiometric and kinetic structure of the metabolic model and the role of the storage polymers glycogen, poly-phosphate and PHB. In chapters 3 and 4, the model is validated at full-scale conditions. Chapter 3 describes the simulation of a full-scale nutrient removing WWTP at steady state conditions. In chapter 4, the start-up of a full-scale WWTP is simulated. Under start-up conditions model kinetics can be tested more extensively than is possible at steady state conditions. In chapter 5, a method presented for data evaluation, reconciliation and model calibration. The method is tested in a full-scale simulation study. In chapter 6, all previous modelling experiences with the metabolic BioP model are evaluated. The adapted model is tested on the basis of several lab-scale experiments. The updated version of the integrated model is presented in appendix III. In chapters 7 and 8, the model is put into practice. In a case study, a process control based on the oxidation reduction potential was evaluated. On the basis of a literature study, in chapter 7 the physical meaning of measuring the oxidation reduction potential in activated sludge is discussed. 
In chapter 8, it is demonstrated how the model can be used for (ORP related) process control and control design.","modelling; activated sludge; biological; wastewater treatment process; biological phosphorus removal; orp control; calibration; data evaluation","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:b261f904-bdbd-4bce-b9eb-32ad19ad3eb2","http://resolver.tudelft.nl/uuid:b261f904-bdbd-4bce-b9eb-32ad19ad3eb2","Sustainable Chemical Processes and Products. New Design Methodology and Design Tools","Korevaar, G.","Harmsen, G.J. (promotor)","2004","The current chemical industry is not sustainable, which leads to the fact that innovation of chemical processes and products is too often hazardous for society in general and the environment in particular. It really is a challenge to implement sustainability considerations in the design activities of chemical engineers. Therefore, the main question of this thesis is: how can a trained chemical engineer develop a conceptual design of a chemical process or a chemical product in such a way that the final result clearly contributes to sustainable development? This question is answered after a profound discussion about the current chemical engineering practice and its relation to the sustainability debate. This dissertation claims that sustainable development of chemical engineering practices requires a general design methodology accompanied by a set of design tools. Such a combination of methodology and tools does not exist in the chemical engineering field. The author developed a new design methodology and seven new design tools that enable the incorporation of sustainability issues into the design practice of the chemical engineering field. The application and validity of the methodology and its tools are shown in seven, mainly industrial, case studies.","design methdology; design tool; sustainable development; chemical engineering; chemical processes; chemical products","en","doctoral thesis","Eburon, Delft","","","","","","","","Applied Sciences","","","","",""
"uuid:9886c180-64db-41dc-a9b5-1f2d07174f1a","http://resolver.tudelft.nl/uuid:9886c180-64db-41dc-a9b5-1f2d07174f1a","The rho-trimedia processor","Sima, M.","Vassiliadis, S. (promotor)","2004","","VLIW processors; reconfigurable hardware; media processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:de60c1ea-b6de-4600-aa8a-885dedb145cb","http://resolver.tudelft.nl/uuid:de60c1ea-b6de-4600-aa8a-885dedb145cb","Structure Characterization Using Mathematical Morphology","Luengo Hendriks, C.L.","Van Vliet, L.J. (promotor)","2004","This thesis deals with the application of mathematical morphology to images of some kind of structure, with the intention of characterizing (or describing) that structure. The emphasis is placed on measuring properties of the real-world scene, rather than measuring properties of the digital image. That is, we require that the measurement tools are sampling-invariant, or at least produce a sampling-related error that is as small as possible. Filters defined by mathematical morphology can be defined both in the continuous space and the sampled space, but will produce different results in both spaces. We term these differences ""discretization errors"". Many of the results presented in this thesis decrease the discretization errors of morphological filters.","granulometry; size distribution; structuring element; sampling invariance; translation invariance; rotation invariance; image analysis; image processing","en","doctoral thesis","Pattern Recognition Group","","","","","","","","Applied Sciences","","","","",""
"uuid:a6c6a630-e021-4931-a9d3-6f83b42595a6","http://resolver.tudelft.nl/uuid:a6c6a630-e021-4931-a9d3-6f83b42595a6","A decision support method for ICT investment problems: Identification and justification of information process improvements","Poppeliers, J.L.","","2004","","ICT; information process","","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","",""
"uuid:b2fe0604-0d30-4cb8-856e-65941fb9da59","http://resolver.tudelft.nl/uuid:b2fe0604-0d30-4cb8-856e-65941fb9da59","Refiring bricks at 540ºC: Hot masonry and magnetic separation close the brick recycling process","Van der Graaf, A.","Van Dijk, K (contributor); Hendriks, C. (contributor)","2004","For many decades, stony debris from building and demolition sites was reused as road building material. Until recently there was no need to look for other uses for this mixture of concrete and masonry rubble. However, now that our supplies of marl and gravel (two of the three ingredients of mortar and concrete) are dwindling and prices are beginning to rise, the concrete industry is showing a growing interest in ways to recycle concrete rubble. Since masonry rubble can only be used as a granulate for road construction in combination with concrete rubble, a vision of a masonry rubble heap without any takers looms ahead if no alternative application is found. Anticipating the problem,civil engineer Koen Van Dijk of the Civil Engineering Materials Science department at tu Delft has developed a number of processing techniques for reusing masonry rubble. Van Dijk was recently gained his doctorate for developing a process that can extract 50% of the bricks from masonry debris from buildings that have been dismantled selectively. He also uses a magnetic technique to extract the brick fraction from the remaining masonry debris. Mixed with fresh clay it becomes the raw material for a new generation of bricks, thus closing the clay brick cycle.","masonry rubble; processing techniques","en","journal article","Delft University of Technology","","","","","","","","Delft University of Technology","","","","",""
"uuid:1449da2c-6b5b-4cd5-b68f-a5608a877b5f","http://resolver.tudelft.nl/uuid:1449da2c-6b5b-4cd5-b68f-a5608a877b5f","Optimisation of polishing processes by using iTIRM for in-situ monitoring of surface quality","Meeder, M.; Mauret, T.; Booij, S.; Braat, J.; Faehnle, O.","","2003","The possibilities of iTIRM, an in-process surface measurement tool, are explored in this research. Experiments are done to test the applicability for qualifying and optimising finishing processes for optical surfaces. Several optical glasses, different polishing agents and ductile grinding are included in these experiments. It is concluded that iTIRM can be used for both mentioned applications but that it is, at least for now, an R&D tool only and not applicable in production.","surface measurement; in-process measurement; polishing; ductile grindling","en","conference paper","SPIE","","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:cfded0ca-6141-4c5e-ac8b-4c4c70328ba2","http://resolver.tudelft.nl/uuid:cfded0ca-6141-4c5e-ac8b-4c4c70328ba2","Dry-cleaning with high-pressure carbon dioxide","Van Roosmalen, M.J.E.","Witkamp, G.J. (promotor)","2003","Dry-cleaning is a process for removing soils and stains from fabrics and garments which uses a non-aqueous solvent with detergent added. The currently most used dry-cleaning solvent is perchloroethylene (PER), which is toxic, environmentally harmful and suspected to be carcinogenic. Carbon dioxide could be an ideal solvent to replace PER; carbon dioxide is non-toxic, non-flammable, ecologically sound, cheap, non-corrosive, available on a large scale, and can therefore serve as a permanent sustainable alternative for the currently used solvents. In this work, a dry-cleaning process using high-pressure carbon dioxide has been investigated and optimized. A disadvantage of CO2 is its limited ability to dissolve polar molecules. However, the characteristics of CO2 can be modified by the addition of a co-solvent. Various co-solvents have been investigated of which 2 propanol (IPA) was the most suitable. For most non-particulate soils, the results using CO2, water and IPA were comparable to the results using PER. For particulate soils, however, the cleaning-results using CO2, water and IPA were worse than with PER. Particulate soils can be removed from textile by mechanical action and/or surfactants. Only relatively large particles (>20 µm) could be removed in CO2 by increasing the mechanical action. Unfortunately, increasing the mechanical action had no positive influence on the removal of small particles (<20 µm). In order to remove small particles in CO2, surfactants have to be used. Amino acid based surfactants have been studied. For the production of amino acid based surfactants, renewable, low-cost raw materials are used. Furthermore, these surfactants have a low toxicity, are biodegradable and are not irritating to the skin. 
These characteristics make the amino acid based surfactants attractive for dry-cleaning with carbon dioxide. The amino acid based surfactants gave good results for dry-cleaning with liquid CO2. The surfactant Amihope LL (N-lauroyl-L-lysine) gave the best cleaning results. An important process parameter using this surfactant was the addition of water. The addition of water is required for sufficient removal of non-particulate soils. However, when no water was added to the system, there was a large increase in particle removal. Therefore, a 2-bath process was proposed. The first bath is for particulate soil removal and has optimal conditions for particulate soil removal; the second bath has optimal conditions for non-particulate soil removal. The 2-bath process using Amihope LL gave good results: the result for particulate soil removal was 84 % compared to the results for PER, the result for non-particulate soil removal was 98 % compared to PER and the overall result was 92 % compared to PER. All surfactants that gave good results for particulate soil removal (anionic, amine and amino acid based surfactants) were, surprisingly, hardly soluble in CO2 and were (largely) present as solid particles. The mechanisms that may play a role in particulate soil removal using the surfactant Amihope LL were investigated. The cleaning action of the surfactant is probably a combination of adsorption and mechanical action. An economic evaluation shows that the costs for dry-cleaning using the optimized CO2-process are equal to the costs of the PER-process. Recycling of the surfactant and the co-solvent can lower the costs of the CO2-process further.","dry-cleaning; carbon dioxide; mechanical action; surfactants; process optimization","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:336986cc-f267-4b75-a9af-3f14c84ff82d","http://resolver.tudelft.nl/uuid:336986cc-f267-4b75-a9af-3f14c84ff82d","UWB Near-Range GPR Phase-Based Techniques for Profiling Rough Surfaces and Detecting Small Low-Contrast Shallow Subsurface Objects","Sai, B.","Ligthart, L.P. (promotor)","2003","","Ultra wideband radar; ground-penetrating radar; phase measurement; phase variation signatures; rough surfaces; coherent radar signal processing; landmine detection; subsurface imaging; surface profile","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:c5dda151-0e5e-4d15-9e22-8c92b271559e","http://resolver.tudelft.nl/uuid:c5dda151-0e5e-4d15-9e22-8c92b271559e","Automatic Lipreading in the Dutch Language","Wojdel, J.C.","Koppelaar, H. (promotor)","2003","This thesis deals with many aspects of the bimodal speech processing research. It lays out the general framework of visually enhanced speech processing computer systems together with some insight in the human speech bimodal speech perception. There are three main contributions to the field of bimodal speech processing presented in the thesis. Firstly, a novel approach to visual feature extraction suitable for lipreading part of the speech processing system (Lip Geometry Estimation) is presented and described in full detail. Another new, powerful concept which is presented in this thesis is Person Independent Feature Space (PIFS), which is qualitatively analyzed on basis of real-life recorded material. The quantitative improvements obtained by using PIFS for lipreading applications are also presented here. The last major contribution of this research is the Delft University of Technology Audio-Visual Speech Corpus (DUTAVSC). This corpus has been extensively used throughout the research and provides a good starting point for future development of lipreading-capable speech processing systems.","automatic lipreading; bimodal speech processing; artificial intelligence","en","doctoral thesis","LODTR S.A.","","","","","","","","Information Technology and Systems","","","","",""
"uuid:61acf9eb-5ddb-4634-96ad-c10e4817b9c6","http://resolver.tudelft.nl/uuid:61acf9eb-5ddb-4634-96ad-c10e4817b9c6","Quantification of 2D subtidal bathymetry from video","Roelvink, J.A.; Aarninkhof, S.G.J.; Wijnberg, K.M.; Reniers, A.J.H.M.","","2003","","kustmorfologie; coastal morphology; beeldverwerking; image processing; data-assimilatie; data assimilation","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:7313a454-bd35-43de-b911-217dabea2541","http://resolver.tudelft.nl/uuid:7313a454-bd35-43de-b911-217dabea2541","Knowledge-based Design: Developing Urban & Regional Design into a Science","Klaasen, I.T.","Drewe, P. (promotor)","2003","An implemented design of an urban area imposes long-term conditions on societal processes, such as the opportunities people have to organize their lives in temporospatial respects in a healthy and safe living environment, and the way social, cultural and economic institutions and organisations can function. In view of the fact that both people and institutions experience recurrent problems - ranging from getting lost in new housing estates to the awkward accessibility of workplaces - it is striking that in a world as ours laden as it is with scientific knowledge and its applications, the design/redesign of urban areas is based on scarcely any substantive-scientific knowledge in the area of urban & regional design. Designers are supplied with knowledge, particularly from social-spatial sciences, which is explicit, well-founded and open to critique, but design itself is not considered to be scientific. As to the professional field, there has been very little concern to develop a scientific foundation for urban & regional design, particularly during the last few decades. The widely held notion that each design is unique and based on individual creativity has hitherto left little room for thinking about urban & regional design as a science, whilst the immense complexity of urban areas plays a role as well. The assumptions underlying my research are that urban & regional design can be developed into a societally relevant science, that this depends on the view held regarding the significance of urban & regional design to society, and what is considered to be the object of the discipline derived from this view. 
I base these assumptions on the knowledge and insights I have acquired during the last fifteen years; the first ten years within the Chair of Urban & Regional Design, and after that within the Chair of Spatial Planning, both of the Faculty of Architecture of the Delft University of Technology. The research can therefore be characterised as an interpretative-theoretical study, a term from the methodologist A.D. de Groot ((1961) 1968: 325ff.). The characteristics of such a study are that within a particular collection of data (tentative) connections are made, that it must be impossible to solve the problem directly by experimental tests, and that the interpretation is not the only one possible. This makes the research an 'intellectual quest'. The first question that needs to be answered is where urban & regional design fits into the field of sciences, if a substantive-scientific approach is indeed possible. The standpoint from which I start my quest is that the real world exists independently of us as knowing subjects. As the cognitive power of human beings is limited by 'nature' and 'nurture', the real world is only knowable by approximation (the correspondence theory of truth). People are selective in their perception in accordance with general organization principles of simplification, categorisation and generalisation. They make connections, so forming a picture of how reality 'fits together', or can be fitted together. Reality, including urban reality, can therefore be approached as an (open) system, or a system of (open) systems of equal and unequal order. Elements of a system derive their significance (location value) from the position they hold in that system. Processes in a system can be either linear or cyclic. Whether changes are perceived in a spatial or temporal sense depends on the spatial or temporal grain of the perception (Jong 1992: 16).
Within urban systems we can distinguish physical urban systems, made up of spatial elements such as buildings, streets, parks, sewers, stations, or made up of configurations of spatial elements like buildings, streets etc. which have certain characteristics in terms of form, physical state and function. These urban spatial objects, in mutually coherent combinations and in coherent combinations with natural spatial objects, have been and are constructed or reconstructed in order to fulfil a carrying function and an information function, on behalf of the urban society. The visual manifestation at a particular moment is called the composition of the urban area or urban landscape. Scientific knowledge is based upon rational considerations. Language, which includes visual language, has an organizational function in the thought process and is a means for conveying scientific ideas. Science limits 'chance' in the sense of 'random events'. Science presupposes generalization. For this purpose we must simplify systems and focus on similarities rather than differences. In order to communicate about and/or reflect on these systems we have to use models: conscious simplifications of (past, present and future) reality. Relevant to urban & regional design are pragmatic (analogue) models, particularly spatial ones, and from a functional viewpoint particularly descriptive, intentional-projective and exploratory-projective models. I therefore do not regard a model as an 'example to follow' as some (if not most) architects do. An urban or regional design is a proposal for a coherent package of spatial interventions in a certain urban or urbanescent area, and always affects more than one sector. 
Sciences may be divided into: (1) formal sciences without empirical content, (2) empirical sciences which concentrate on 'that which is (or was) the case' and therefore 'that which will probably be the case' and (3) practical, action-oriented sciences that have the application of science as their object of scientific research (Peursen 1986: 61) (Fig.A). The findings of practical sciences are then applied in concrete cases (Fig.B). Practical science is not knowledge that is acquired in practice (Drenth 1995: 157; Gunsteren 2001): practice only generates questions. The ultimate question for the practical sciences is 'does it work?', i.e. which effects are to be expected and under what conditions. This involves both insight into constructive options and utilitarian options. Given the extrascientific problem statement a monodisciplinary approach is unlikely to be fruitful. On the basis of the above we can conclude that urban & regional design as a science would have to be categorised among the practical sciences. The same scientific rules and standards apply in empirical and practical sciences. In the views regarding these rules two main approaches can be distinguished: an objectivistic one and a subjectivistic/relativistic one, the primary exponents of which are, respectively, Karl Popper and Thomas Kuhn. Kuhn's conclusion that science should be conducted in a primarily non-rational, consensus-driven manner may be correct in a descriptive sense, but from a realistic view of scientific knowledge - the only one possible for practical sciences - this is not tenable as a goal. However 'natural' the inductive verification of hypotheses/theories may be, we do not necessarily have to follow this tendency, no more than we do the tendency to prefer 'certainty' over 'doubt'. Popper's views in their strictest sense are not tenable either.
From an objectivistic point of view criticism has primarily been levelled at the fact that he regards the context of discovery as scientifically irrelevant: hypotheses and theories can also have a rational foundation. Popper's student, the objectivist Imre Lakatos - to some extent influenced by Kuhn - also criticised in particular the 'unnaturalness' of the exclusive focus on falsification instead of on verification/corroboration. This led him to develop a research approach which, due to its emphasis on heuristics, abduction and plausible reasoning, shows interesting similarities with (urban & regional) design processes. This approach offers perspectives for the development of urban & regional design into a practical-scientific discipline. In practical sciences the context of justification is however viewed in a different light than in empirical sciences. Ethical and financial considerations as well as the time factor may make it impossible to test a practical science hypothesis experimentally, under controlled, repeatable conditions. In these cases one will have to draw plausible conclusions on the basis of a series of applications, regarding the necessary conditions and effects that arise. Because the term 'context of justification' has much less significance in these cases, the term 'application' is preferable. In urban & regional design the above-mentioned testing limitations apply in a cumulative manner. Acquiring information from the context of application moreover is hindered because the conditions under which proposals are implemented in practice show relatively few similarities and these conditions cannot be manipulated. Based on this, the emphasis in the practical-scientific approach to urban & regional design lies on the context of discovery: what is assumed to be possible, and what are the probable effects, under which conditions.
Empirical and formal scientific knowledge, in part derived from the context of application, should provide the necessary constraints. The next question is whether, in the development or non-development of 'urban and regional design' into a substantive-scientific discipline, what one regards as the object of that discipline - arising from what one considers to be the significance of urban & regional design to society - plays a part. To answer that question I position urban & regional design in relation to the disciplines of 'spatial planning' and 'architecture'. Lines of approach are (1) the systems approach to (future) urban reality, (2) the limitations inherent to working with spatial models, of which the most important is the time factor and consequently that processes can only be shown indirectly (Fig.C), and (3) an examination in further detail of the concepts of the 'carrying function' and the 'information function' (subjective use value and experiential value). With regard to the latter two terms I argue, for example, that to be able to experience a physical urban system, unlike a building, the ability to use that system is a necessary condition. This positioning makes it clear that the general description of the physical urban system as the object of urban & regional design can be interpreted in two ways and in practice is indeed interpreted in two ways. The physical urban system can be seen on the one hand as an autonomous system, with the other components of the urban system as the system environment, and on the other hand as an inextricable component of the urban system as a whole. The first approach mentioned focuses on the composition of the physical system and on the linear processes in this system (characterized by a large temporal grain). I call this the pattern-oriented approach. This type of urban & regional design focuses on the so-called 'transformation' of urban areas.
Influenced by the discipline 'architecture', and usually based on a quantitative programme of functional requirements, the creation of an experiential value guided by personal form concepts is seen as the main task for urban & regional designers. In part due to this emphasis on the design, pattern-oriented designs have the character of blueprints, which makes them fairly inflexible. In the second approach mentioned, which I call process-oriented, the focus is primarily on small-grained cyclic urban-societal processes with a spatial dimension. These processes do of course also have a large-grained linear component. This design view emphasises the use value, with the experiential value as an essential, functional support of this value. Important components are the functional-spatial structures, which are supported by relevant visual design that facilitates any desired processes, and the potential user bases needed for the functioning of collective institutions. For process-oriented design it is sufficient to indicate the functional-spatial structure and a number of essential indications with regard to visual design. Urban & regional designers are insufficiently aware of the distinction between these two types of approach. In addition to having shared concepts with similar definitions, pattern-oriented and process-oriented designs each have their own terminology as well as different definitions for the same concept. They also make use of different types of spatial models. This results in a lack of understanding in the field, confusion in the language used and insufficient insight into the societal significance of urban & regional design. We can conclude that the perspectives for a practical-scientific approach of urban & regional design differ according to which standpoint is adopted. Pattern-oriented design offers little perspective for a practical-scientific approach. We cannot ask 'does it work?'
with regard to possible uses, unless the elements of physical urban systems are seen in mutual relation. 'Perception' therefore also has no functional significance. What is more, the emphasis lies on cultural and aesthetic aspects as well as personal form concepts. This is different for the process-oriented approach in which the physical urban system cannot be regarded as separate from the urban system as a whole, and in which the use value is the primary point of interest and the perception of the physical urban system supports the use. I concretize these perspectives by outlining the relationship between urban & regional design and research in practice. In general, when making proposals for the spatial development of cities, there is a division of tasks whereby empirical scientists supply knowledge and insight into spatial planning, which is integrated by designers in a 'creative leap' into a design for a specific situation. In the Netherlands the term 'the unity of town planning' is applied (Lohuizen 1948: 3). This division of tasks also applies to means-oriented design, whereby the possibilities of the situation provide direction and the design result is evaluated ex ante. In practice, however, there seems to be a gap between empirical science and urban & regional design. There are a number of reasons for this 'applicability gap'.
An important one is that increasingly more knowledge of a varied nature has become available, and designers as a consequence no longer have a comprehensive grasp of this knowledge, the more so as much of the information is irrelevant (Hillier, Musgrove & O'Sullivan 1972). Designers are consequently unable to let go of their preconceptions; on the contrary, they become more dependent upon them (ibid.). In the nineteen sixties the rising interest in the scientific approach to urban & regional design focused not only on the procedural side of design but also for a while on the substantive side (e.g. J. Jacobs 1961; Doxiadis 1968; Alexander 1977). However, there was and still is much resistance against the scientific approach to urban & regional design: a rational, systematic approach is thought to adversely affect the essential creativity needed when making designs. Also the guild-like manner in which the community of urban & regional designers is organized, similarly to that of architects, does not stimulate the development of a scientific approach. An education and working climate that is characterized by a master-apprentice relationship, in which often no account is given of the resulting product, where the validity of claims is derived from the status of the speaker, and where debates regarding views held in the field are avoided rather than sought, is not the most conducive to a scientific development of that discipline. In so far as (realistic) scientific research is carried out in this context, it is almost without exception empirical descriptive research of a (cultural) historical nature: design research. For practical-scientific research one must not think in terms of each design being unique; in other words it is necessary to dissociate the object of design from the specific design context.
This opens the doors for the design of theoretical models with spatial organization principles as 'building blocks': designs that in spatial-ecological and/or socio-cultural and/or economic-technical terms are independent of the situation. The activity of design acts to serve research and has become a research method: research by design. In research by design knowledge is not integrated directly and individually into each localized design, but general, integrated scientific urban & regional design knowledge is developed in an additional design phase. Explicit scientific knowledge is essential in this practical-scientific approach to urban & regional design as this makes a critical-rational debate regarding this knowledge possible. Theoretical models bridge the previously mentioned applicability gap. Creativity is crucial in both the development of this knowledge and its application in specific situations. If the research approach of Lakatos is related to the approach developed at the Chair of Urban & Regional Design, then I regard the following as elements of the hard core of the practical-scientific research programme: · Viewing the built (future) reality as an open system; · Approaching this physical urban system as an organized complex system; · Viewing the physical urban system as part of the urban system as a whole; · The fact that an element of this physical system derives its significance from its position in the system on the one hand and contributes to making the system what it is on the other; · The distinguishing of various temporal grains in societal processes; · The distinguishing of levels of scale within the physical urban system on the basis of societal processes that are characterised by a relatively small temporal grain; · The consequently necessary cohesion between the system levels; · The boundaries of design areas at various scales being defined on the basis of societal processes; and · Unlinking the design object from a specific design 
context; · Regarding design not just in the usual sense but also as a method of research. The essence of the research approach can be described as follows. Start with a number of basic elements from the object under study and manipulate them, in part on the basis of organizational principles, in such a way that the resulting theoretical models of physical urban systems are logically plausible and internally-consistent: constructions which, in the light of our available formal and empirical knowledge, are likely to function stably when implemented. Contextual conditions and effects analyses then have to be carried out for these basic theoretical models, in part based on empirical research. During the process of research by design there is also a continual ex ante evaluation. In order to limit the theoretically infinite number of possibilities, the breadth of useful research is determined by situations that occur in reality. Counter examples (Lakatos's 'monsters') play an important part: they increase the theoretical content of theoretical models. Urban & regional design entails the question of whether plausible spatial planning principles and plausible theoretical models can be derived from partially inconsistent information with heuristic and creative abduction as mechanisms (Schomburg 1991: 59; Magnani 2001: 78). Theoretical models are not ready-made templates for creating localized designs, but 'tools'. The task of the designer of a localized design is on the one hand to retain as much quality as possible of the chosen theoretical model - this can even be expanded with the help of specific situational potentials - and on the other to utilize the spatial individuality of the site in the design. The purpose of the latter is to bring about the spatial diversity that is necessary even if it is only to make the most of the information function. Theoretical models can be seen metaphorically as being made of elastic. 
The intended effects will have to be continually checked in concrete situations as the actual environment is not considered in the theoretical model. A designer should also consider any unintended consequences. Theoretical models can also play a role during a localized design process in the sense that a localized design problem is 'taken from' the specific situation and generalised. This is the transition area between 'research by design' and 'research driven research'. A conscious simplification facilitates the studying of the (hypothesized) essence of the problem and establishes a relationship with generic urban & regional design knowledge. For several years now a debate has been going on at the Delft University of Technology about whether a design can be regarded as scientific output. This debate seems to deal with the question of whether a design, a spatial model, is an acceptable means of communication. The debate in fact deals, or should deal, with the question of whether urban & regional design is indeed a (practical) scientific discipline. As there are limited possibilities for proving hypotheses, this question ought to be taken seriously. The research approach described here, together with the examples of concrete research projects in this book, allows this question to be answered affirmatively. This is supported by examples of practical-scientific knowledge, dating mainly from the nineteen sixties and seventies, in the form of organizational principles and theoretical models (or initiatives in that direction). The research projects by academic staff and students described in this book also show that the research is bearing fruit, that in the terms of Lakatos there is a 'positive heuristic'. It cannot be concluded, however, that this research programme is the only possible one. An interpretive-theoretical model study does not produce research results that exclude any other result. 
This makes this type of empirical research similar to design, in that it is a process that always has more than one possible outcome. More and better knowledge regarding urban & regional design, however useful in light of the spatial problems and consequently with regard to the functioning of people and institutions, does not in itself improve the (future) spatial situation. In a democratic context it is not the experts who determine what happens, but the elected administrators whom these experts supply with knowledge and insights. More and better knowledge does not necessarily mean that better decisions will be made. My findings lead to recommendations for university research and education. The most important of these is that a change in culture is necessary if urban & regional design is to be approached scientifically. The guild-like culture that characterises urban & regional design education should therefore be transformed into a culture in which general scientific rules are applied and taught, in which lecturers possess knowledge about the various views regarding science, in particular in relation to urban & regional design, and in which critical debates are encouraged. From a substantive viewpoint, not only spatial-ecological and economic-technological aspects but also socio-cultural aspects of urban & regional design should receive attention. This attention should concern linear as well as cyclic processes, including changes in these processes. Organizational principles and theoretical models should be further developed. Knowledge from other fields of science should be 'translated' into forms that are suitable for research by design. In view of the cross-fertilisation between education and research - as shown in this book - the two have to be considered in close connection. Special attention should be given to those students who show interest in a scientific approach to the discipline. 
It is, after all, these students in particular who will help feed the body of knowledge of urban & regional design.","urban design; regional design; urban planning; regional planning; practical science; critical realism; context of application; lakatos; system; theoretical models; pattern; process; research by design","en","doctoral thesis","","","","","","","","","Architecture","","","","",""
"uuid:8f88a2e6-29d4-40ab-b79f-81b3b5a6d9de","http://resolver.tudelft.nl/uuid:8f88a2e6-29d4-40ab-b79f-81b3b5a6d9de","Use-driven product conceptualization based on nucleus modelling and simulation with scenarios","Van der Vegte, W.F.; Horváth, I.","","2003","Conventionally, simulation of product behaviour is employed as a pre-realization type of assessment at the end of the design process, making only late feedback for improvement possible. Enabling the start of optimization in the conceptualization is expected to have significant influence on design efficiency. However, the available information at that stage is uncertain, incomplete, multifold and imprecise, which calls for new simulation techniques. This paper proposes nucleus-based modelling and simulation as a solution. A nucleus is a modelling entity to capture the relationships between the lowest level metric elements of the product and to represent the physical effects governing the behaviour of the product. Tolerating uncertainty, incompleteness, modality and imprecision, a nucleus-based model is able to provide an integral model of the actors of the use process. Simulations are controlled by so-called scenarios that arrange a logical structure of feasible situations for the integral model. The paper describes the content of the nucleus-based integral model and presents an application case study to illustrate the potentials of this new approach.","conceptual product design; use process; scenario-based simulation; nucleus modelling","en","conference paper","SCS-Europe BVBA","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:a53c24ba-40e4-4d16-855e-4b5e6b3ba81e","http://resolver.tudelft.nl/uuid:a53c24ba-40e4-4d16-855e-4b5e6b3ba81e","Nucleus-based product conceptualization - Part 2. Application in designing for use","Van der Vegte, W.F.; Horváth, I.","","2003","when products are being designed, especially in conceptual design where there is a multitude of options for the designer. This paper presents a methodology to generate resource-integrated models unifying representations of the product, the user and the environment based on nucleus modelling, which is presented in Part 1 of this paper, according to scenarios that describe the use of products. This homogeneous representation allows modelling both from an object point of view and a process point of view. A designer can model known use processes and obtain simulation-based predictions of ad-hoc situations. The nucleus is the lowest level modelling entity that can be used equally well in representing the three actors. The time-dependent relations allow behavioural simulation in space and time that is the basis of use process modelling and forecasting. A case study is presented that clarifies the method of application and the achieved results. The simulation was based on a single scenario; management of multiple scenarios is an issue remaining for future research.","designing for use; conceptual design; simulation; process modelling","en","conference paper","The Design Society","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:22d9b8ce-e32d-40b6-8b84-4c08c16720ec","http://resolver.tudelft.nl/uuid:22d9b8ce-e32d-40b6-8b84-4c08c16720ec","Simulation Integrated Design for Logistics","Veeke, H.P.M.","Lodewijks, G. (promotor); Bikker, H. (promotor)","2003","The design of an innovative logistic system is a complex problem in the solution of which many disciplines are involved. Each discipline has developed its own way of conceptual modeling for a logistic system based on a monodisciplinary perception. In essence this leads to a communication problem between the different disciplines and consequently to expectations on the formulated solution that do not correspond with the real solution. In this thesis a basic systems approach is used to define a conceptual model of a logistic system that can be used by all disciplines involved as a common reference leading to the design. A combination of a soft and a hard systems approach leads to a conceptual model in which the problem is formulated in terms of required performances and process structures. The logistic system is modeled as a structure of functions around three flows: orders, products and resources. The model evolves during the design project and is an enduring supporting tool for decision making with a clear relation to the system's objectives. This PROcess-PERformance (PROPER) model is formulated in interdisciplinary terms and thereby enables the communication between different disciplines. The PROPER model only reflects the structure of a system; it does not model the time-dependent behavior of the system. This behavior is essential for correct decision making, because it improves the understanding of parallel and stochastic aspects of the system. Usually this behavior is ""simulated"" on a computer. In practice simulation is only used during the final stages of a design project and then a correction of objectives and/or decisions is impossible or very expensive. 
In this thesis the use of simulation is recommended for decision making from the very start. To achieve this, the description of time-dependent behavior is also defined at an interdisciplinary level. Natural language is used to describe the processes as defined in the PROPER model at each aggregation stratum. These descriptions enrich the problem formulation phase with in-depth knowledge of the time-dependent behavior of the system. Like the other disciplines, simulation evolved as a specialist discipline. In order to preserve a direct connection with the process descriptions of the PROPER model, these natural language process descriptions are translated into an object oriented Process Description Language PDL. This language can be implemented in any object oriented software environment. It is implemented here on the Borland Delphi platform, which is based on the programming language Pascal. The implementation is called TOMAS: ""Tool for Object oriented Modeling And Simulation"". TOMAS is completely object oriented and fully complies with the ""Process Interaction"" implementation of the Discrete Event System Specification method (DEVS). In order to support the growing level of detail of the PROPER model during a design project, TOMAS also supports distributed simulation by offering an open event scheduling mechanism and communication between models at different aggregation strata. Finally the use of PROPER, PDL and TOMAS is illustrated with an already finished complex project: the design of the first automated container terminal in Rotterdam. It is shown that the use of this approach would have led to a clear and complete objective definition and would have warned the project participants at an early stage of a mismatch between expected and real operational performance. 
This approach will not automatically lead to improved logistic designs, but it does contribute to a better correspondence between expectations and reality.","systems approach; logistics; process interaction simulation","en","doctoral thesis","Delft University Press","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:924614e6-0579-4b3b-b950-753bd918d115","http://resolver.tudelft.nl/uuid:924614e6-0579-4b3b-b950-753bd918d115","A disturbance management approach to improving the performance of batch process operations","Schumacher, J.","Weijnen, M.P.C. (promotor)","2003","Scheduling of batch processes is a complex task, given the discontinuous nature of batch processes and their particular process characteristics, equipment-dependencies and sequence-dependencies. It has a major influence on the ultimate performance of batch processes. However, the scheduled performance is often not realized because production is not in conformance with the schedule. Disturbances such as production overruns or rush orders result in differences between the schedule and its realization, necessitating reactive scheduling. Therefore, in situations where the scheduling problem is complex and the impact of disturbances is large, scheduling optimization fails to contribute to improving the performance of batch processes. In this thesis an approach to disturbance management is presented. This approach will help to improve the performance of batch operations in those complex situations where the impact of disturbances is large. It is aimed at reducing disturbances and reducing their (negative) consequences. The approach that has been developed falls into two phases, an operational analysis phase and an operational improvement phase. In the analysis phase knowledge on disturbances is generated. To achieve the aims of disturbance management, this knowledge can be used in both initial and reactive scheduling as well as in reconsidering the position of the scheduling department in the company and in finding the causes of disturbances. 
Practical relevance and applicability of the approach are ensured by case studies in industrial practice and consultation of experts.","process industry; batch processes; production scheduling; disturbance management","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:85709836-d345-4082-a02e-9c1e95a91fed","http://resolver.tudelft.nl/uuid:85709836-d345-4082-a02e-9c1e95a91fed","Embedding data and task parallelism in image processing applications","Soviany, C.","Sips, H.J. (promotor); van Vliet, L.J. (promotor)","2003","","Image processing; data parallelism; task parallelism","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:946d413c-9063-48d5-bafc-512f94a6fa85","http://resolver.tudelft.nl/uuid:946d413c-9063-48d5-bafc-512f94a6fa85","Development of a miniaturized total chemical analysis system for real-time monitoring of fermentations","Van Guijt, R.M.","Van Dedem, G.W.K. (promotor)","2003","","Miniaturized total analysis system; microfluidics; process monitoring","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:192ef6c9-3bb1-45a0-a534-bbb100d4b8df","http://resolver.tudelft.nl/uuid:192ef6c9-3bb1-45a0-a534-bbb100d4b8df","Ammoxidation of Halogentoluenes for the Production of Halogen substituted Benzonitriles","Coppelmans, J.W.; Van der Hel, A.I.; Lukkien, V.; Meulendijks, R.","","2003","Document(s) from the Chemische Procestechnologie collection","ammoxidation; multi purpose; chlorobenzonitriles; dichlobenil; benzonitrile; Casoron G; halogens; halogentoluenes; bromobenzonitriles; fluorobenzonitriles; Crompton; herbicides; conceptual; process design; quench","en","report","Delft University of Technology","","","","","","","2013-02-04","Applied Sciences","DelftChemTech","","","",""
"uuid:c61244bf-6531-401a-b0ea-dbb598fc7d77","http://resolver.tudelft.nl/uuid:c61244bf-6531-401a-b0ea-dbb598fc7d77","Renewal Processes and Repairable Systems","","Dekking, F.M. (promotor)","2003","In this thesis we discuss the following topics: 1. Renewal reward processes The marginal distributions of renewal reward processes and their version, which we call in this thesis instantaneous reward processes, are derived. Our approach is based on the theory of point processes, especially Poisson point processes. The idea is to represent the renewal reward processes and their version as functionals of Poisson point processes. Important tools we use are the Palm formula and the Laplace functional of Poisson point processes. The results are presented in the form of Laplace transforms. An application of the instantaneous reward processes to the study of traffic is given. Some asymptotic properties of the renewal reward processes are reconsidered. A proof of the expected-value version of the renewal reward theorem using the Tauberian theorem is given. A second-order term in the expected-value version of the renewal reward theorem is obtained. Similar results for the instantaneous reward processes are investigated. Asymptotic normality of the instantaneous reward processes is proved. The covariance structure of renewal processes, which can be considered as a special case of renewal reward processes, is derived. As an addition, we study system reliability in a stress-strength model, where the amplitudes of stresses can be considered as rewards. We consider renewal and Cox processes as models for the occurrences of the stresses. Using our result on renewal reward processes we investigate the effect of dependence between stresses and strengths on system reliability. 2. Integrated renewal processes The marginal probability density function of an integrated homogeneous Poisson process is known in the literature. 
It is natural to generalize the integrated homogeneous Poisson process into integrated non-homogeneous Poisson, Cox, and renewal processes. In this thesis we derive expressions for the marginal distributions of integrated Poisson and Cox processes using conditioning arguments, and derive the marginal distributions of integrated renewal processes using the theory of point processes. The results are presented in the form of Laplace transforms. Asymptotic properties of the integrated renewal processes are also investigated. An application to the study of traffic is given. 3. Total downtime of repairable systems An expression for the cumulative distribution function of the total downtime of a repairable system, which is regarded as a single component, under an assumption that the failure and the repair times of the system are independent has been derived by several authors using different methods. We use a different method (using point processes) to compute the distribution function of the total downtime. We also consider a more general situation where we allow dependence of the failure and the repair times of the system. The covariance structure and asymptotic properties of the total downtime for the independent case are also known in the literature. We derive similar results for the dependent case. Examples are given to see the effect of dependence between the failure and the repair times on the total downtime. We also discuss the total downtime of repairable systems consisting of n ≥ 2 stochastically independent components. We derive an expression for the marginal distribution of the total uptime of the system for the case in which the failure and the repair times of each component are exponentially distributed. 
For arbitrary failure or repair times of the components we derive an expression for the mean of the total uptime.","Poisson point processes; renewal processes; repairable systems","en","doctoral thesis","Delft University Press","","","","","","","","Information Technology and Systems","","","","",""
"uuid:8eb19b59-ed53-41a7-92a9-00e0fd3591ba","http://resolver.tudelft.nl/uuid:8eb19b59-ed53-41a7-92a9-00e0fd3591ba","Knowledge transfer in water management: A communication perspective","Bots, P.W.G.; Rozemeijer, M.J.C.","","2003","","knowledge transfer; knowledge management; share information; actor analysis; template for process design","en","report","Delft Cluster","","","","","","","","","","","","",""
"uuid:261e7eca-e195-4de0-a6e5-17b1377af0e9","http://resolver.tudelft.nl/uuid:261e7eca-e195-4de0-a6e5-17b1377af0e9","How to handle colored observation noise in large least-squares problems","Klees, R.; Ditmar, P.; Broersen, P.","","2003","An approach to handling colored observation noise in large least-squares (LS) problems is presented. The handling of colored noise is reduced to the problem of solving a Toeplitz system of linear equations. The colored noise is represented as an autoregressive moving-average (ARMA) process. Stability and invertibility of the ARMA model allow the solution of the Toeplitz system to be reduced to two successive filtering operations using the inverse transfer function of the ARMA model. The numerical complexity of the algorithm scales proportionally to the order of the ARMA model times the number of observations. This makes the algorithm particularly suited for LS problems with millions of observations. It can be used in combination with direct and iterative algorithms for the solution of the normal equations. The performance of the algorithm is demonstrated for the computation of a model of the Earth’s gravity field from simulated satellite-derived gravity gradients up to spherical harmonic degree 300.","large least-squares problems; auto regressive moving-average process; noise power spectral density function; satellite gravity gradiometry","en","journal article","Springer","","","","","","","","Aerospace Engineering","Remote Sensing","","","",""
"uuid:1c308f4a-5727-4478-a109-9524ea22d74e","http://resolver.tudelft.nl/uuid:1c308f4a-5727-4478-a109-9524ea22d74e","Calibration support to the Generic Framework program","Gijsbers, P.J.A.; Solomatine, D.P.; Te Stroet, C.B.M.; Minnema, B.","","2003","","Calibration; process; techniques; model; groundwater; Generic Framework","en","report","Delft Cluster","","","","","","","","","","","","",""
"uuid:1a0cc9f1-fe69-4b95-97e6-b24d1805b389","http://resolver.tudelft.nl/uuid:1a0cc9f1-fe69-4b95-97e6-b24d1805b389","Algorithms for Separation of Secondary Surveillance Radar Replies","Petrochilos, N.L.R.","Dewilde, P. (promotor); Comon, P. (promotor); van Genderen, P. (promotor)","2002","Air Traffic Control (ATC) centers aim at ensuring the safety of aircraft cruising in their area. The information required for this mission includes the data provided by primary and Secondary Surveillance Radar (SSR). The first indicates the presence of an aircraft, whereas the second gives information on its identity and altitude. All aircraft contain a transponder, which sends replies to the secondary radar in a semi-automatic mode; it is in fact an exchange. The increase in air traffic implies that in the near future the current SSR will not be able to perform correctly, which calls for improving its quality. This thesis proposes a possible improvement of the SSR. We propose to replace the rotating antenna at reception with an antenna array to gain spatial diversity, in order to perform beamforming. Given the density of the traffic, high-resolution techniques are mandatory to separate the sources. This is a blind source separation problem, but unlike standard cases, the sources send packets (not continuously), the packets do not completely overlap (a non-stationary situation), the alphabet is binary but not antipodal ({0, 1} instead of {+1, −1}), and the carrier frequencies are not identical. Among the problems to solve, two main issues are the non-synchronisation of the sources and the non-calibration of the antenna. This thesis presents new contributions to this field, including the identifiability of parameters and related Cramér-Rao bounds, and the design of receiver algorithms taking into account the specific encoding of the data (such as the MDA and the ZCMA algorithms presented herein). 
The performance of these algorithms is tested by extensive computer simulations as well as actual measurements; the setup of the experimental platform is also part of the thesis framework.","array signal processing; source separation; secondary surveillance radar","en","doctoral thesis","DUP Science","","","","","","","","Applied Sciences","","","","",""
"uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","http://resolver.tudelft.nl/uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","Dynamic Optimization in Business-wide Process Control","Tousain, R.L.","Bosgra, O.H. (promotor); Backx, A.C.P.M. (promotor)","2002","The chemical marketplace is a global one with strong competition between manufacturers. To continuously meet the customer demands regarding product quality and delivery conditions without the need to maintain very large storage levels, chemical manufacturers need to strive for production on demand. In this thesis we research how market-oriented production can be realized for the particular class of multi-grade continuous processes. For this class of processes production on demand is particularly challenging due to the complex trade-off between performing costly and time-consuming changeovers and maintaining high storage levels. The first requirement for market-oriented production is that production management cooperates with purchasing and sales management. We propose the use of a scheduler as a decision support system in a cooperative organization constituted by these players. In such a scheduler, decision making is represented using decision variables and their effect on the company-wide objective, which is chosen to be the added value of the company, is modeled. The scheduler then selects a decision strategy that is optimal with respect to the objective and presents this strategy to the decision makers, who use it as a basis for their actual decisions. The company-market interaction is modeled using a transaction-based modeling framework. Therein it is not the actual market behavior that is modeled, but the expected effect of the interaction of the company with the market. Two types of transactions can be modeled in this framework: orders, which result from contracts with suppliers and customers, and opportunities, which express the expected sales and purchases. 
Two different approaches to the modeling of production decisions are taken, the choice of which depends largely on the implementation of the process control hierarchy that is assumed. In the first approach, production management and control is performed by a single-level controller and the control decisions are the minute-to-minute manipulation of the valves. This approach is academically interesting, though practically intractable due to the combination of long horizons and fast sampling times. In the second approach the process control hierarchy consists of a scheduling layer, at which it is determined what products will be produced when, and a process control layer, which determines how this production is realized. This approach is taken in the rest of the thesis.","chemical processes; optimization; supply chain","en","doctoral thesis","Delft University Press","","","","","","","","Design, Engineering and Production","","","","",""
"uuid:107c3759-b665-4a5f-bf2a-7d4a89402d52","http://resolver.tudelft.nl/uuid:107c3759-b665-4a5f-bf2a-7d4a89402d52","Schatting bodemligging brandingszone uit Argus beelden","Aarninkhof, S.G.J.; Kessel, T. van","","2002","","beeldverwerking; image processing; morfodynamische modellen; morphodynamic models; bodemligging; bed level; brandingszone; surf zone; kustlijnontwikkeling; coastline development","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:944777a3-5c98-42f6-9043-df890e8a363a","http://resolver.tudelft.nl/uuid:944777a3-5c98-42f6-9043-df890e8a363a","Ontology-based modeling of product functionality and use part 2: Considering use and unintended behavior","Van der Vegte, W.F.; Kitamura, Y.; Mizoguchi, R.; Horváth, I.","","2002","The function-behavior representation language FBRL was originally devised for modeling and knowledge management of intended product behavior. This paper explores its potential for application to other-than-intended behavior in a use context, introducing consideration of the user and the environment. We found that slightly adapted building blocks from as-is FBRL can be applied to behavior that is unintended and/or not performed by the product. To support anticipation of unintended behavior in design, special attention has to be paid to the knowledge that connects product functions, user actions and environment behavior. We distinguish typical and atypical forms of unintended use. Some forms of typical unintended use can be directly derived from the intended use. Yet, most forms of unintended use require additional knowledge, e.g., from user observations. To include such knowledge, subsequent effort has to be put into its systematization.","ontologies; product design; function modelling; unintended behaviour; use process","en","conference paper","University of Zielona Góra","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:63d10996-943f-4ee6-9748-34a6344e1fee","http://resolver.tudelft.nl/uuid:63d10996-943f-4ee6-9748-34a6344e1fee","Evaluation of nourishments at Egmond with Argus video monitoring and Delft3D-MOR","Nipius, L.J.","","2002","","zandtransportmeting; sand transport measurement; beeldverwerking; image processing; zandtransport; sand transport; zandsuppletie; sand nourishment; kustmorfologie; coastal morphology","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:d54583cf-0eac-44b8-b7a6-f2c46d96fbda","http://resolver.tudelft.nl/uuid:d54583cf-0eac-44b8-b7a6-f2c46d96fbda","Consideration and modeling of use processes in computer-aided conceptual design: A state of the art review","Van der Vegte, W.F.; Horváth, I.","","2002","If conceptual modeling and simulation of consumer durables could include consideration of use processes, designers could more successfully anticipate the interaction of products with users in a use environment. This is the basic idea behind our research into computer-aided modeling and forecasting of product-use processes. This survey investigates the current state of the art, which forms the basis for companies or researchers developing systems in this area. It includes overviews of (a) definitions, (b) relevant achievements in this field, and (c) research in related areas, such as ergonomics, human-computer interaction and machine design. In recent years, there has been no significant development of novel, dedicated use-process models. Current models represent discrete actions, observed from use of existing products or prescribed by the designer. Simulation techniques are applied to deal with continuous changes, predicting the behavior of the product and its environment, but typically not the user. Yet, promising techniques for simulating humans are emerging, for instance in computer-graphics animation. New integrated techniques for simulation open the way to quantitative and more accurate predictions of the use process, but they cannot handle the multiplicity of possible use processes resulting from different users in different environments. In this respect, the development of use-process models with increased knowledge content and facilities for integration with simulations can give a solution.","survey; review; use process modelling; life cycle representation","en","journal article","Society for Design and Process Science","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:d34fc1a6-9afb-4d18-a322-b1b943710b82","http://resolver.tudelft.nl/uuid:d34fc1a6-9afb-4d18-a322-b1b943710b82","Multi-anode linear SDDs for high-resolution X-ray spectroscopy","Sonsky, J.","Van Eijk, C.W.E. (promotor)","2002","Radiation detectors are used in a variety of fields to sense X-rays and γ-rays, visible, UV and IR photons, neutrons or charged particles. With their help, advanced medical diagnostics can be performed (e.g. X-ray radiography, computed tomography, fluoroscopy), material research can undergo rapid development (e.g. X-ray microanalysis, X-ray diffraction, Mössbauer spectroscopy and element imaging), and space and its evolution (astronomy and astrophysics) can be explored through observation of the X-rays and γ-rays emitted by astronomical objects. Semiconductor detectors, with silicon being the leading material, are used in many of the abovementioned applications. This thesis describes the development of a special type of silicon detector for 1D-position-sensitive X-ray spectroscopy: the multi-anode silicon drift detector (SDD). The developed prototype is an ideal candidate for X-ray diffraction applications. Moreover, due to the high flexibility of its design, SDDs can be utilized in many other applications where sensing of X-rays in the range from 200 eV to 20 keV with ultimate energy resolution is needed, e.g. X-ray fluorescence spectroscopy, X-ray holography and synchrotron experiments.","position-sensitive detector; X-ray spectroscopy; electron cloud confinement; silicon detector processing; low temperature processing; p-JFET integration","en","doctoral thesis","Delft University Press","","","","","","","","Interfaculty Reactor Institute","","","","",""
"uuid:a15d866a-f3dd-4143-a232-adff09a3ce54","http://resolver.tudelft.nl/uuid:a15d866a-f3dd-4143-a232-adff09a3ce54","Space-time multiuser receivers for wideband code division multiple access","Hernandez, M.A.","Arnbak, J.C. (promotor); Prasad, R. (promotor)","2002","","Multiuser receivers; direct-sequence CDMA; space-time processing","en","doctoral thesis","Delft University Press","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:78bf20c8-aa8f-4464-a6f4-5572e28e54e9","http://resolver.tudelft.nl/uuid:78bf20c8-aa8f-4464-a6f4-5572e28e54e9","Contrast enhancement for depolarizing radar targets","Moisseev, D.N.","Ligthart, L.P. (promotor)","2002","","Radar; remote sensing; radar polarimetry; clutter suppression; target enhancement; doppler processing; atmospheric radar","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:1093e517-2a6a-4ab7-acb7-898febb9ef1b","http://resolver.tudelft.nl/uuid:1093e517-2a6a-4ab7-acb7-898febb9ef1b","STORM-Rhine, main report: Executive summary","Heun, J.C.; de Groen, M.M.; Werner, M.","","2002","","simulation game; roleplay; participatory decision-making; river functions; stakeholder interests; river engineering; river management; floodplain management; institutional arrangements; biological; hydraulic and morphological processes; spatial planning and land use","en","report","Delft Cluster","","","","","","","","","","","","",""
"uuid:d34f4f3f-6f1f-44df-843b-88e1ea966c7c","http://resolver.tudelft.nl/uuid:d34f4f3f-6f1f-44df-843b-88e1ea966c7c","Changes in steel can be heard: The knock-on effect of flipping crystal lattices","Van de Graaf, A.","Van Bohemen, S. (contributor); Den Ouden, G. (contributor)","2002","Cracks in welds are a major problem in the steel processing industry. One of the causes is a change in the microstructure that occurs if steel cools too rapidly during the welding process. Researcher Stefan van Bohemen of the Applied Sciences faculty at TU Delft demonstrates how this change is accompanied by the production of high-frequency mechanical vibrations, in other words, acoustic emission. The production of sound, however slight, turns out to be a good indicator for tracing faults in welds. In the second half of his research term, van Bohemen will focus on mapping these changes in steel. This is essential in order to be able to determine the optimum conditions not only during the welding process, but also during the production of unique, new steel types such as TRIP steel, which appears to be destined as the new material for the automotive industry. As the research is considered to be of practical relevance it is supported by Dutch industry through the Netherlands Institute of Metals Research (NIMR).","steel processing industry; cracks; welds","en","journal article","Delft University of Technology","","","","","","","","","","","","",""
"uuid:3e13afdd-5cba-42d9-a7b9-557d6c987457","http://resolver.tudelft.nl/uuid:3e13afdd-5cba-42d9-a7b9-557d6c987457","A canonical process for estimation of convex functions: The ""invelope"" of integrated Brownian motion + t^4","Groeneboom, P.; Jongbloed, G.; Wellner, J.A.","","2001","A process associated with integrated Brownian motion is introduced that characterizes the limit behavior of nonparametric least squares and maximum likelihood estimators of convex functions and convex densities, respectively. We call this process “the invelope” and show that it is an almost surely uniquely defined function of integrated Brownian motion. Its role is comparable to the role of the greatest convex minorant of Brownian motion plus a parabolic drift in the problem of estimating monotone functions. An iterative cubic spline algorithm is introduced that solves the constrained least squares problem in the limit situation, and some results obtained by applying this algorithm are shown to illustrate the theory.","convex function; estimation; Gaussian process; integrated Brownian motion; least squares","en","journal article","Institute of Mathematical Statistics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:5c300a66-c108-4dd3-a90b-c059f3950df7","http://resolver.tudelft.nl/uuid:5c300a66-c108-4dd3-a90b-c059f3950df7","Ebb and flood channel systems in the Dutch tidal waters","Van Veen, J.","Van der Spek, A. (contributor); Stive, M. (contributor); Zitman, T. (contributor)","2001","This paper should be considered as Van Veen's most important publication since his thesis. It summarizes the results of 20 years of intensive study of estuarine and tidal-basin morphodynamics in The Netherlands. The paper is testimony to Van Veen's keen observational and artistic skills. His approach is nearly ""Da Vincian"" in the sense that he is not only a fascinated but also sharp observer of nature and tries to capture the essentials of the dynamic behavior of complex coastal systems in apparently simple sketches. Many of the natural systems that Van Veen studied have been regulated since; thus, this paper contains a set of irreplaceable, high-quality observations on the natural dynamics of tidal systems. Along with Robinson's (1960) paper on ebb-flood channel systems, it forms an excellent introduction to the study of channel dynamics in estuaries, tidal inlets, and tidal basins. Unfortunately, Van Veen's paper was published in Dutch, with only a brief summary in English. Understandably though, the paper has received very limited recognition in the international literature. The present publication is a tribute to Professor Kees d'Angremond, who retired on November 28, 2001, from the chair of Coastal Engineering at Delft University of Technology. We have seized this occasion to publish an English version of Van Veen's paper. The translation is annotated in order to put it in the perspective of our present-day ideas on coastal dynamics.","Coastal processes","en","journal article","Delft University of Technology","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:c226bf0a-79d6-4357-80ff-c68c4d5f0416","http://resolver.tudelft.nl/uuid:c226bf0a-79d6-4357-80ff-c68c4d5f0416","Approximation formulae for sand transport by currents and waves and implementation in DELFT-MOR","Rijn, L.C. van; Roelvink, J.A.; Horst, W. ter","","2001","","morfodynamische modellen; morphodynamic models; zandtransportformules; sand transport formulae; zandtransport; sand transport; sedimenttransportprocessen; sediment transport processes; bodemtransport; bedload transport","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:8111ead7-d115-40f8-a972-5669cf4247c6","http://resolver.tudelft.nl/uuid:8111ead7-d115-40f8-a972-5669cf4247c6","Automatische bepaling van geometrie-oplossingen van Argusbeelden: Validatie van Autogeom en toelichting op gebruik","Kessel, T. van; Aarninkhof, S.G.J.","","2001","","beeldverwerking; image processing; meetsystemen; monitoring systems; kustmorfologie; coastal morphology; kustlijnontwikkeling; coastline development","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:2d3bd986-5fbf-4623-954a-b38b6566cba9","http://resolver.tudelft.nl/uuid:2d3bd986-5fbf-4623-954a-b38b6566cba9","Kustlijn detectie uit Argus videobeelden: Gebruikershandleiding Intertidal Beach Mapper","Aarninkhof, S.G.J.; Nipius, L.","","2001","","beeldverwerking; image processing; meetsystemen; monitoring systems; kustlijn; coastline; waterstanden; water levels; kustlijnontwikkeling; coastline development","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:c7396974-36ce-4201-a951-b8c3873e2153","http://resolver.tudelft.nl/uuid:c7396974-36ce-4201-a951-b8c3873e2153","Towards a unified description of product-related processes","Van der Vegte, W.F.; Vergeest, J.S.M.; Horváth, I.","","2001","To increase the effectiveness of computer support in development and production of durable industrial products (artifacts), three problems still need to be solved: (a) integration of aspect models used in artifact development, (b) integration of the aspect models used in process planning and process representations, (c) integration of (a) and (b). This seems to be difficult, since different application fields, representation techniques and information contents have to be combined. It means that instead of pure artifact and process models, we have to develop artifact-process models. This article focuses on the integration of four kinds of typical life-cycle processes: (1) design (mental creation processes), (2) producing (physical creation processes), (3) operation processes (internal behavioral processes), and (4) use processes (external behavioral processes). In order to achieve a sufficiently high level of formalization in the computer-mediated handling of various processes, this article introduces a set-theory based representation. This representation makes it possible to stereotype the observed or forecasted processes, largely independent of their content. The applicability of the approach in real-life cases is demonstrated by three examples. Further research is oriented to the integration of process modeling with artifact modeling, as well as application to more complex cases.","process modelling; life cycle representation","en","journal article","Society for Design and Process Science","","","","","","","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:794c4834-bd46-423c-b7bd-9e9e4561a018","http://resolver.tudelft.nl/uuid:794c4834-bd46-423c-b7bd-9e9e4561a018","3D Simulation and Visualization Studies of Flow in Porous Media","Heijs, A.W.J.","de Leeuw, S.W. (promotor)","2001","","computed tomography; 3D image processing; computer simulation; flow in porous media; scientific visualization; information visualization","en","doctoral thesis","Shaker Publishing BV","","","","","","","","Applied Sciences","","","","",""
"uuid:78cdab33-eeb2-424a-89eb-11c5f6fcd1dd","http://resolver.tudelft.nl/uuid:78cdab33-eeb2-424a-89eb-11c5f6fcd1dd","Knowledge Management in Courseware Development","Van Aalst, J.W.","Dietz, J.L.G. (promotor); Kempen, G.A.M. (promotor)","2001","Educational multimedia software development is a special case of software development, partly because of the large variety of disciplines involved, partly because the use of the multiple media sets its own demands to the design and realization of educational software (courseware). In this thesis we investigate the improvement of the maturity of the courseware development process. To this end, we employ a particular kind of knowledge management to facilitate project development teams in their knowledge needs. The results indicate that this particular kind of knowledge management can indeed contribute significantly to the maturity of the development process.","educational multimedia; software process improvement; knowledge management","en","doctoral thesis","Delft University Press","","","","","","","","Information Systems and Technology","","","","",""
"uuid:a44d7d92-ccb5-4de2-9f3f-49465194d02a","http://resolver.tudelft.nl/uuid:a44d7d92-ccb5-4de2-9f3f-49465194d02a","Quantifying the Qualitative Design Aspects","Durmisevic, S.; Ciftcioglu, Ö.; Sariyildiz, S.","","2001","","Qualitative Design Data; Information Processing; Soft Computing; Knowledge Modeling; Neuro-Fuzzy Network","en","conference paper","","","","","","","","","Architecture","","","","",""
"uuid:62d7a071-559f-4a7b-b7bc-dc6075158599","http://resolver.tudelft.nl/uuid:62d7a071-559f-4a7b-b7bc-dc6075158599","Automatische bepaling beeldverschuiving Argus: Gebruikershandleiding (versie 2)","Kessel, T. van; Aarninkhof, S.G.J.","","2000","","beeldverwerking; image processing","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:97767055-f985-446c-aac8-7d80db65c071","http://resolver.tudelft.nl/uuid:97767055-f985-446c-aac8-7d80db65c071","Video-based monitoring of the Egmond beach- and shoreface nourishments: Evaluation of the 1999 nourishments with the help of the Argus video system","Caljouw, M.","","2000","","strandverbetering; beach improvement; zandsuppletie; sand nourishment; kustlijnontwikkeling; coastline development; morfodynamische modellen; morphodynamic models; beeldverwerking; image processing; Noord-Holland","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:8ffa77ec-97fc-45ba-9f28-1454cad4d3bb","http://resolver.tudelft.nl/uuid:8ffa77ec-97fc-45ba-9f28-1454cad4d3bb","Automatische aanmaak van stapelbeelden: Gebruikershandleiding ArgusStackTool (AST)","Aarninkhof, S.G.J.","","2000","","beeldverwerking; image processing; waterstandmeting; water level measurement","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:24d5e723-c7e6-404b-b446-7fd49a0b6b3f","http://resolver.tudelft.nl/uuid:24d5e723-c7e6-404b-b446-7fd49a0b6b3f","Automatische aanmaak van compositiebeelden: Gebruikershandleiding ArgusMergeTool (AMT)","Aarninkhof, S.G.J.","","2000","","beeldverwerking; image processing; kustmorfologie; coastal morphology","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:4f9737fc-28f6-46e2-93b6-1e016bab08af","http://resolver.tudelft.nl/uuid:4f9737fc-28f6-46e2-93b6-1e016bab08af","Analytical model for wave-related sediment transport","Bosboom, J.","","1999","","stromingsmodellen; flow models; sedimenttransportprocessen; sediment transport processes; zandtransportmodellen; sand transport models","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:67eb1cc0-fd0f-443f-87a4-96e59c114af5","http://resolver.tudelft.nl/uuid:67eb1cc0-fd0f-443f-87a4-96e59c114af5","Softening in the blast furnace process: Local melt formation as the trigger for softening of ironbearing burden materials","Bakker, T.","Heerema, R.H. (promotor)","1999","","iron-metallurgy; blast furnace process; softening-melting; solidus relations","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:ef2d0c17-8562-46f0-9ac3-a2a3e001d1f8","http://resolver.tudelft.nl/uuid:ef2d0c17-8562-46f0-9ac3-a2a3e001d1f8","Coordinating collaborative building design","Heintz, J.L.","Tzonis, A. (promotor)","1999","","architectural theory; design process; design practice; collaborative design; project management","en","doctoral thesis","","","","","","","","","Architecture","","","","",""
"uuid:5fe1ffd5-ff40-4602-9178-1b1b073a2e63","http://resolver.tudelft.nl/uuid:5fe1ffd5-ff40-4602-9178-1b1b073a2e63","Document interpretation applied to utility maps","Schavemaker, J.G.M.","Backer, E. (promotor)","1999","","document interpretation; image processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:5c63f76a-532a-4de2-9b5a-2bbc1275613e","http://resolver.tudelft.nl/uuid:5c63f76a-532a-4de2-9b5a-2bbc1275613e","The maritime pilot at work: Evaluation and use of a time-to-boundary model of mental workload in human-machine systems","Van Westrenen, F.C.","Hale, A.R. (promotor)","1999","People have proven to be flexible and reliable in many control tasks, such as car driving and ship navigation. Much effort has been invested in automating these tasks, but the benefits have so far been limited and the problems enormous. Other tasks, such as plant control, where complicated systems are tightly coupled to obtain large volumes of high-quality products with very strict production demands, show a much higher level of automation. This automation makes control of these complicated systems possible, relieving the controller of many tasks, improving the production quality, and reducing the operator's workload. In both cases, in order to fit the work to the operator and to ensure that his capacity is used to the maximum, that system safety is optimal, and that working conditions meet human long-term needs, extensive knowledge of operator abilities and limitations is required. In this study the relationship between process characteristics and monitoring behaviour was studied in order to learn more about operator control behaviour. This was done in two situations: a relatively complex process simulator in which the operator had to perform a rather complex and realistic control task, and the real situation of maritime pilots on board sea ships. The operator's monitoring activity was measured using mental workload measures. The hypothesis was that workload is a linear function of the time-to-contact or time-to-boundary of each of the process variables. An assumption was that the mental workload is the result of sampling and decision making and is proportional to the frequency of this cycle (sampling and decision making).
The technique used to record mental workload was the use of heart-rate, in particular heart-rate variability. The heart-rate is not constant, but fluctuates about 10% around the mean heart-rate. It is known from the literature that an increase of mental workload coincides with a decrease of heart-rate variability (HRV), and this decrease of HRV was used as an indication of increased workload, which in turn was an indication of the operator's monitoring behaviour. This hypothesis was tested using a simulated process (DURESS): a simulation of a hot-water production plant. The subjects had to produce water of a certain temperature and quantity by carefully adjusting the controls of the simulator. During the experiment the heart-rate was recorded, together with all control activities and the production performance. The results show that the time-to-boundary (TTB) approach is successful in explaining a large part of the operator's monitoring behaviour: the TTB measure correlated well with the HRV during the control phase, which confirms the theory on monitoring behaviour. This means that there is a direct relationship between the time left for the operator to intervene and his sampling frequency. The second experiment was a similar study but now with a real task: maritime pilots doing their normal work. Four Rotterdam pilots participated in an experiment in which they were recorded on video, their heartbeat was recorded, and their voyage was logged on maps, all during their normal work. Twenty-five voyages were recorded, with a large range of types and sizes of ships and destinations. This experiment provided a series of results. The element that was considered most important was the relationship between the TTB resulting from the fairway layout and the mental workload. The correlation functions between HRV and MTB were largely as predicted by the theory.
These correlation functions had a maximum at about Δx = -0.5 km for all locations, meaning that the minimum level of HRV was reached before the TTB had reached its minimum, i.e. the workload precedes the critical moment; in other words, the pilot makes a decision about 0.5 km before the situation becomes critical. Taking the results of the DURESS experiment and the maritime-pilot experiment together gives very strong support to the theory on the relationship between monitoring and time-to-boundary in a complex control task. This relationship was inferred from the relationship between heart-rate variability (HRV) and the minimum time-to-boundary (MTB). In addition to these main results, various conclusions are drawn with respect to the recording techniques used, pilotage, and shore-based radar support.","workload; task analysis; navigation; process control; pilotage; heart-rate variability; heart rate","en","doctoral thesis","","","","","","","","Mechanical, Maritime and Materials Engineering","","","","",""
"uuid:58399d5d-a427-421c-9ddc-1a4dab4aa915","http://resolver.tudelft.nl/uuid:58399d5d-a427-421c-9ddc-1a4dab4aa915","Recovery of Hydrocarbon Products from a 5 Gsm³/a Natural Gas Stream","Eilers, R.F.; De Lathouder, K.M.; Law, J.R.; Tulleken, B.A.","","1999","","Natural Gas; Gas Processes; Plate-fin Heat Exchanger; Molecular Sieve; Turbo-expander; Distillation","en","report","Delft University of Technology","","","","","","","","Applied Sciences","DelftChemTech","","","",""
"uuid:55d2a8d0-7118-4a07-a102-327e2cbc1cc9","http://resolver.tudelft.nl/uuid:55d2a8d0-7118-4a07-a102-327e2cbc1cc9","Adjusting Life Cycle Assessment Methodology for Use in Public Policy Discourse","Bras-Klapwijk, R.M.","Thissen, W.A.H. (promotor)","1999","","lifecycle assessment; product evaluation; policy analysis; public policy process; discourse paradigm; participatory; frames; PVC; chlorine","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:f2947dec-4272-4fbe-95c0-cb3ad4a664bf","http://resolver.tudelft.nl/uuid:f2947dec-4272-4fbe-95c0-cb3ad4a664bf","Standaardmethode schade- en slachtofferbepaling: Definitiestudie","Groot, S.","","1999","","schade; damage; risico-analyse; risk analysis; gegevensverwerking; data processing","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:e2b2a71f-c701-44f7-8bfe-7a899c43180a","http://resolver.tudelft.nl/uuid:e2b2a71f-c701-44f7-8bfe-7a899c43180a","Robust option replication for a Black-Scholes model extended with nondeterministic trends","Schoenmakers, J.G.M.; Kloeden, P.E.","","1999","Statistical analysis of various stocks reveals long-range dependence behavior of the stock prices that is not consistent with the classical Black and Scholes model. This memory or nondeterministic trend behavior is often seen as a reflection of market sentiments and causes the historical volatility estimator to become unreliable in practice. We propose an extension of the Black and Scholes model by adding to the original Wiener term a term involving a smoother process which accounts for these effects. The problem of arbitrage will be discussed. Using a generalized stochastic integration theory [8], we show that it is possible to construct a self-financing replicating portfolio for a European option without any further knowledge of the extension and that, as a consequence, the classical concept of volatility needs to be re-interpreted.","black and scholes option price theory; long-range dependence; stochastic analysis of square zero variation processes; portfolios; arbitrage","en","journal article","Hindawi Publishing Corporation","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:ba58aebb-1edb-462e-a0db-5ec96a37fa3f","http://resolver.tudelft.nl/uuid:ba58aebb-1edb-462e-a0db-5ec96a37fa3f","Isotonic inverse estimators for nonparametric deconvolution","Van Es, B.; Jongbloed, G.; Van Zuijlen, M.","","1998","A new nonparametric estimation procedure is introduced for the distribution function in a class of deconvolution problems, where the convolution density has one discontinuity. The estimator is shown to be consistent and its cube root asymptotic distribution theory is established. Known results on the minimax risk for the estimation problem indicate the estimator to be efficient.","convex minorant; cube root asymptotics; isotonic estimation; empirical process","en","journal article","Institute of Mathematical Statistics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","","",""
"uuid:2308c2de-f90b-49d8-a631-4f88b1f16ed8","http://resolver.tudelft.nl/uuid:2308c2de-f90b-49d8-a631-4f88b1f16ed8","Low-mobility transport of coarse-grained material: Inventory","Mosselman, E.; Akkerman, G.J.","","1998","","sedimenttransport in rivieren; sediment transport in rivers; stochastische processen; stochastic processes; uitzeving; sediment sorting","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:06846507-b510-4f56-ab84-9f7060345fe3","http://resolver.tudelft.nl/uuid:06846507-b510-4f56-ab84-9f7060345fe3","Neutron Capture Gamma-Ray Spectroscopy With 107AG, 109AG and 115In, for Parity Violation and Nuclear Structure Studies","Zanini, L.","Postma, H. (promotor)","1998","","neutron capture; gamma-cascade process; spins and parities of resonances; parity violation","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:017175d5-4932-4967-bab4-ae6c582900a6","http://resolver.tudelft.nl/uuid:017175d5-4932-4967-bab4-ae6c582900a6","Coastal Dynamics","Bakker, W.T.","Roelvink, J.A. (contributor); Steetzel, H.J. (contributor); Bliek, A. (contributor); Rakhorst, H.D. (contributor); Roelse, P. (contributor)","1998","This book deals with ""Coastal Dynamics"", defined here in a narrow sense as a mathematical theory that starts from given equations of motion for the sediment and leads, with the continuity equation and given boundary conditions, to a calculated (possibly schematized) coastal topography, which is generally a function of time. This is clearly analogous to aero- and hydrodynamics, thermodynamics, hydrology and other related fields. The subject of this book, however, covers only a specific part of coastal dynamics. It is based upon the notion of the old masters that, for a manager, the back of a cigar box should be large enough to evaluate all the information and anti-information which is poured over him. For instance: statements based upon high-tech number-crunching can come to a sad end when they do not match large-scale continuity. Think for instance of tidal computations in which time-varying boundary conditions (shoals which emerge above water level during part of the tide) are not reproduced accurately enough. The positive role of refined numerical techniques in solving problems in coastal dynamics should be stressed; however, one will not find much about them in this book. The emphasis is on physics rather than on mathematics. The book is meant for coastal managers, to inspire them, to enable them to put sensible questions and, if necessary, to say: ""Oh, no sir"". The theory would be sterile without a consideration of the validity of the equations of motion, all the more so since these equations of motion are not as evident, straightforward and single-valued as, for instance, the Euler equations in hydrodynamics.
The Newtonian laws are not quite sufficient for the computation of the sediment motion, because the motion of grains is subject to stochastic factors such as the shape of the grains, the shape of the granular bed surface, turbulence, irregular wave motion, etc. In modern sophisticated computer models a detailed physical approach is possible; this includes the consideration of water and sand apart from each other, investigation of the turbulent and viscous forces on the grains, and calculation of the sediment motion.","coastal morphology; coastal processes","en","report","TU Delft, Section Hydraulic Engineering","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:1d58e4e5-4a00-4365-a050-6808fcf2de82","http://resolver.tudelft.nl/uuid:1d58e4e5-4a00-4365-a050-6808fcf2de82","Fundamentals of Image Processing","Young, I.T.; Gerbrands, J.J.; Van Vliet, L.J.","","1998","","digital image processing; digital image analysis","en","book","Delft University of Technology","9075691017","","","","","","","Applied Sciences","Imaging Science and Technology","","","",""
"uuid:20a1f3e6-4015-4e44-8be0-cf66c5bba8c4","http://resolver.tudelft.nl/uuid:20a1f3e6-4015-4e44-8be0-cf66c5bba8c4","Designing electronic document infrastructures","Uijlenbroek, J.J.M.","Sol, H.G. (promotor)","1997","","electronic document management; document processing; document infrastructure","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:b3545c16-0807-479e-ad68-57e1e3091d84","http://resolver.tudelft.nl/uuid:b3545c16-0807-479e-ad68-57e1e3091d84","Sand flume experiments with graded sediment: Inception report","Mosselman, E.","","1997","","experimenteel onderzoek; experimental research; onderzoekgoten; test flumes; sedimenttransportprocessen; sediment transport processes; gegradeerde sedimenten; graded sediments","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:1f06f34a-378a-49a6-9997-206321ddd242","http://resolver.tudelft.nl/uuid:1f06f34a-378a-49a6-9997-206321ddd242","The Interconnected Fluidized Bed reactor - for gas/solids regenerative processes","Snip, O.C.","van den Bleek, C.M. (promotor)","1997","","fluidized beds; hydrodynamics; solids circulation; gas-solids regenerative processes; regenerative desulfurization; sulfur capture; Interconnected Fluidized Bed","en","doctoral thesis","Delft University Press","","","","","","","","Applied Sciences","","","","",""
"uuid:fabe3f6c-e709-40f2-812d-934b9c571cdb","http://resolver.tudelft.nl/uuid:fabe3f6c-e709-40f2-812d-934b9c571cdb","Classical HPCN geared to application in industry","Verwer, J.; Loeve, W.; Snijdoodt, E.; ten Dam, A.","","1997","In The Netherlands as a result of a national HPCN initiative a Foundation HPCN was established in 1995. The purpose of this Foundation is to stimulate structural and lasting cooperation of universities, technological institutes and industry in economically relevant applications of HPCN. Projects have been selected and since the beginning of 1996 projects are being executed. In the present paper an overview is given of the eight projects that are being executed. The principles of the HPCN program in The Netherlands is illustrated for flow simulation projects in which the use of the most powerful existing computer servers is essential and made feasible for industry including SMEs. The approach is based on integration of local workstations with remote servers for information management and computing tasks.","applications of mathematics; computational fluid dynamics; computer networks; computerized simulation; distributed processing; environment pollution; multiple access; Netherlands; research projects; software tools; supercomputers; training simuolators","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:17ddfa7f-454f-41dd-b2c1-014640c3641f","http://resolver.tudelft.nl/uuid:17ddfa7f-454f-41dd-b2c1-014640c3641f","Overlapped Transform Coding of Images: Theory, Application and Realization","Heusdens, R.","Biemond, J. (promotor)","1997","","data compression; video coding; digital signal processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:8a113778-ebc9-4d2d-8a13-aa7c3a960850","http://resolver.tudelft.nl/uuid:8a113778-ebc9-4d2d-8a13-aa7c3a960850","A Study of Short Circuiting Arc Welding","Hermans, M.J.M.","den Ouden, G. (promotor)","1997","","short circuiting arc welding; process stability; weld pool oscillation","en","doctoral thesis","Delft University Press","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:64d90f40-d184-48a5-9068-b15d2e4b74d7","http://resolver.tudelft.nl/uuid:64d90f40-d184-48a5-9068-b15d2e4b74d7","Overzetten bodempeilingen Waal/Pannerdensch kanaal in GRID","Jonge, J.J. de","","1997","","gegevensverwerking; data processing; bodemligging; bed level; meting; measurement; interpolatie; interpolation; Pannerdens Kanaal; Waal","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:3adbd868-bb81-4cf7-bff5-474efe9cd857","http://resolver.tudelft.nl/uuid:3adbd868-bb81-4cf7-bff5-474efe9cd857","A survey of the NARSIM C/S middleware","Michiels, R.W.F.J.","","1996","This document describes the Client/Server system architecture of NARSIM - the NLR Air Traffic control Research Simulator- focussing on the socalled middleware: the infrastructure that enables all the seperate parts of the simulator to interact with each other. To show the full scope of the architecture, first a somewhat abstract architecture description is given, followed by more concrete technical implications of this architecture and the particular implementation chosen within NARSIM. Also some non-technical issues are discussed which are direct results from either the architecture or the specific implementation. Finally in the appendices a survey of the NARSIM C/S servers is given together with other system documentation, to give a better understanding of the environment in which the NARSIM C/S middleware operates. The most distinctive feature of the NARSIM Client/Server architecture is probably the perfect symmetry between clients and servers. An application cannot be labeled statically as either server or client: only with respect to a certain dialogue can one application be a labeled as a server and the other as a client. Initial experience with the new architecture has shown that this perhaps seemingly arbitrary view of clients and servers has some profound positive consequences on the architecture as a whole, such as a very low degree of complexity and thus ease of understanding and maintenance.","air traffic control; applications programs (computers); architecture (computers); client server systems; computer systems design; distributed processing; interprocessor communication; modularity; protocol (computers); simulators; Unix (operating system)","en","report","","","","","","","Campus only","","","","","","",""
"uuid:1dd32b47-4975-4750-a7e0-56a1ff73afaa","http://resolver.tudelft.nl/uuid:1dd32b47-4975-4750-a7e0-56a1ff73afaa","Designing organizational coordination","Eijck, D.T.T.","Sol, H.G. (promotor)","1996","","coordination; organizational management; process management","en","doctoral thesis","","","","","","","","","Technology, Policy and Management","","","","",""
"uuid:693a33cb-de73-4d45-8d0c-64e6f09678f1","http://resolver.tudelft.nl/uuid:693a33cb-de73-4d45-8d0c-64e6f09678f1","Telematica en informatietechnologie in het verkeer: Telematica and information technology in transports","Minderhoud, M.M.; Bovy, P.H.L.","","1996","","Conference 8525 telecommunication 9117 data processing 8655 traffic 0655 transport 1155 technology 3855 efficiency 5911 comfort 1379 passenger information 8581 driver information 8572 netherlands 8078 Traffic and transport planning (72) traffic control (7","nl","conference paper","COLLOQUIUM VERVOERSPLANOLOGISCH SPEURWERK, LEGMEERSTRAAT 62/2H, AMSTERDAM, 1058 NG, NETHERLANDS","","","","","","","","","","","","",""
"uuid:17dc2d69-07c1-49fd-8405-7138b3194b8d","http://resolver.tudelft.nl/uuid:17dc2d69-07c1-49fd-8405-7138b3194b8d","Electrostatic spray deposition of thin layers of cathode materials for lithium battery.","Chen, C.H.; Kelder, E.M.; Jak, M.J.G.; Schoonman, J.; Chowdari, B.V.R.","","1996","","Battery Secondary cell Lithium Electrostatic spraying Electrostatic deposition Thin film Electrode material Production process Solid electrolyte storage battery Lithium oxide Cobalt oxide Diffusion coefficient Chemical diffusion Diffraction pattern X ray","en","journal article","","","","","","","","","","","","","",""
"uuid:f47a8adc-4e70-4f97-bedb-1a736e35a09e","http://resolver.tudelft.nl/uuid:f47a8adc-4e70-4f97-bedb-1a736e35a09e","Quantitative measurement of sulphur formation by steady-state and transient-state continuous cultures of autotrophic Thiobacillus species","Stefess, G.C.; Torremans, R.A.M.; De Schrijver, R.; Robertson, L.A.; Kuenen, J.G.","","1996","","oxidation; Thiobacillus Production Thiobacillus neapolitanus Application Waste water purification Sulfur Metabolism Sulfides Thiosulfates Oxidation Steady state Microorganism culture Performance evaluation Chemostat Environmental factor Continuous process Oxygen Com","en","journal article","","","","","","","","","","","","","",""
"uuid:b04d8cba-a39a-4088-9252-edc88c8c6f38","http://resolver.tudelft.nl/uuid:b04d8cba-a39a-4088-9252-edc88c8c6f38","Experimental monitoring of strain localization and failure behaviour of composite materials","Geers, M.G.D.; Peijs, T.; Brekelmans, W.A.M.; De Borst, R.","","1996","","A6220F Deformation and plasticity; A6220M Fatigue, brittleness, fracture, and cracks; A8140L Deformation, plasticity and creep; A8140N Fatigue, embrittlement, and fracture; compact tension specimens; composite; Composite material; COMPOSITE MATERIALS; Composite-material; Displacement; displacement field; displacement fields; elongation; entire loading process; experimental data; Failure; failure behaviour; Fracture mechanics; glass fibre reinforced plastics; Hentschel random access tracking system; Loading; local failure behaviour; Localization; mechanical; Model; MODELS; numerical model; numerical models; plastic Deformation; process zone; short glass fibre reinforced polypropylene; Strain; strain fields; strain fields process zones; strain localization; strain localization experimental monitoring; tensile strength; TENSION","en","journal article","Elsevier","","","","","","","","","","","","",""
"uuid:2aea856d-9b4b-490d-a608-e5518a68d8f2","http://resolver.tudelft.nl/uuid:2aea856d-9b4b-490d-a608-e5518a68d8f2","Attitude estimation for low-cost spacecraft equipped with GPS","Chu, Q.P.; Zwartbol, T.; van Woerkom, P.T.L.M.","","1995","The objective of the present paper is to give an overview of spacecraft attitude determination with GPS. GPS receivers with single or multiple antennas nowadays are low-cost sensors that constitute a valuable complement to existing sensor suites. GPS has been widely used for positioning and vehicle navigation. However, using GPS as an attitude sensor is still in the experimental stages. The paper starts with a review of existing GPS attitude determination concepts, with emphasis on integrated GPS involving also low-cost and robust attitude sensors such as accelerometers and magnetometers. Since spacecraft mission support facilities, simulations, and ground tests play important roles in the design, functional verification, and peiformance assessment of GPS based spacecraft attitude determination system, the paper also presents an architecture of mission support facilities and a discussion on simulation and test aspects. Finally the paper describes experiences and projects under development","attitude motion, estimation. Global Positioning System (GPS), GPS simulator, spacecraft attitude estimation, spacecraft operation and mission support, signal processing","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:e865c710-bde5-4911-b176-8e21cb483d5a","http://resolver.tudelft.nl/uuid:e865c710-bde5-4911-b176-8e21cb483d5a","Architectures for Real-Time On-Board Synthetic Aperture Radar Processing","Bierens, L.H.J.","Dewilde, P.M. (promotor)","1995","","synthetic aperture radar; remote sensing; real-time processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:517916f3-9692-4e11-beb8-634fbfedf8d1","http://resolver.tudelft.nl/uuid:517916f3-9692-4e11-beb8-634fbfedf8d1","Three Dimensional Simulation of Fabric Draping","Bergsma, O.K.","de Jong, Th. (promotor)","1995","","composite; fabric; simulation; fabrication process; finite element analysis","en","doctoral thesis","","","","","","","","","Aerospace Engineering","","","","",""
"uuid:9bc947ff-1401-48d0-8554-f52a44df43a1","http://resolver.tudelft.nl/uuid:9bc947ff-1401-48d0-8554-f52a44df43a1","A framework for knowledge-based map interpretation","den Hartog, J.E.","Backer, E. (promotor)","1995","","knowledge-based systems; map interpretation; image processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:4ba98093-206e-4268-bafd-786ff6b16820","http://resolver.tudelft.nl/uuid:4ba98093-206e-4268-bafd-786ff6b16820","Application of distributed artificial intelligence in complex modular critical applications","Zuidgeest, R.G.","","1995","This report provides an overview of the emerging technology Distributed Artificial Intelligence, in particular in the area of Distributed Problem Solving (DPS). DPS refers to coarse-grained (task-level) problem decomposition resulting in a number of expert or knowledge-based systems, generally called agents of which each exhibits some intelligence. The DPS technology has features that may reduce system design complexity through a highly modular approach and, consequently, may reduce life cycle costs through improved maintainability. These problems of complexity and maintenance are often faced with the design of complex critical applications (including many aerospace applications). DPS can provide a more natural solution with respect to system design, development, and maintenance. This report surveys DPS methods and techniques that have potential benefit for these critical applications. The two main approaches in DPS are discussed: blackboard systems and multi-agent systems. Further, the technology is evaluated along a number of criteria relevant for the envisaged applications. Based on this evaluation it is recommended to consider DPS technology in complex modular (decomposable) critical systems and let it be a driving technology for the overall system architecture.","artificial intelligence; decision making; distributed processing; expert systems; flight control; funtional analysis; man machine systems; man-computer interface; mission planning; modularity; real time operation; support systems; task complexity","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:4f31635c-e68b-483b-afb3-514c2d306bef","http://resolver.tudelft.nl/uuid:4f31635c-e68b-483b-afb3-514c2d306bef","Conditional simulation of spatial stochastic models for reservoir heterogeneity / Geconditioneerd simuleren van ruimtelijke stochastische processen voor reservoir heterogeniteit; Geconditioneerd Simuleren van Ruimtelijke Stochastische Processen voor Reservoir Heterogeniteit","Chessa, A.G.","Keane, M.S. (promotor); van Veen, F.R. (promotor)","1995","","stochastic processes; conditional distribution; reservoir characterisation","en","doctoral thesis","Delft University Press","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:6bf2a1be-5d99-4471-9b79-68e73d012b70","http://resolver.tudelft.nl/uuid:6bf2a1be-5d99-4471-9b79-68e73d012b70","Information extraction from multiple transmission electron microscopy images","Buist, A.H.","Kruit, P. (promotor); Van den Bos, A. (promotor)","1995","","high resolution transmission electron micrscopy (HRTEM); electron interference; image processing experimental design","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:8dd5f888-eb95-4350-a78c-5b1c7ef5e297","http://resolver.tudelft.nl/uuid:8dd5f888-eb95-4350-a78c-5b1c7ef5e297","A macro process planning system for machining operations","Jasperse, H.B.","Reijers, L.N. (promotor)","1995","","process planning; machining operations; CAD/CAM; Computer Aided Process Planning","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:629e564c-1e0e-4bdc-95ba-cbced0ee6c26","http://resolver.tudelft.nl/uuid:629e564c-1e0e-4bdc-95ba-cbced0ee6c26","Coastal Defence and the Environment: A guide to good practice","Anonymous, A.","Rijkswaterstaat","1995","CHAPTER 1: INTRODUCTION 1.1 The need for coastal defence 1.2 The need for coastal conservation 1.3 The need for environmental guidance CHAPTER 2: LEGAL AND ADMINISTRATIVE BACKGROUND 2.1 Coastal defence legislation and responsibilities 2.2 Environmental legislation and responsibilities 2.3 The Planning System 2.4 Environmental Assessment 2.5 Recommended Procedures CHAPTER 3: SHORELlNE PROCESSES AND HUMAN RESPONSE 15 Shore units 3.1.1 Type of coast 3.1.2 Sediment type 3.1.3 Lower shore 3.1.4 Upper shore 3.1.5. Supra-shore 3.1.6. Hinterland 3.2.Shoreline processes 3.3. Energy transfers 3.3.1 Wave energy 3.3.2 Tidal energy 3.3.3 Storm surges 3.3.4 Long term sea level change 3.3.5 Wave driven currents 3.3.6 Tidal currents Sediment Transport 3.4.1 Non-cohesive sediment transport 3.4.2 Non-cohesive transport rates 3.4.3 Suspended sediment transport 3.5 Biological processes 3.6 Shoreline erosion and flooding 3.6.1 Extreme events 3.6.2 Erosion 3.7 Shoreline morphology 3.7.1 Beach profiles 3.7.2 Longshore morphology 3.7.3 Cohesive shore morphology 3.8 Temporal factors 3.9 Spatial factors 3.10 Shoreline management - the human response to process 3.10.1 Hard engineering 3.10.2 Soft engineering CHAPTER 4: METHODS AND TECHNIQUES 4.1 Introduetion 4.2 General considerations 4.2.1 Preliminary stages 4.2.2 Design stage 4.2.3 Operational phase 4.2.4 Post project stages 4.2.5 General references 4.3 Offshore techniques Offshore breakwaters Stable bays Barrages and barriers 4.4 Low shore techniques Beach recharge: non-cohesive Beach recharge: cohesive Increase natural sedimentation rate: non-cohesive Increase natural sedimentation rate: cohesive 4.5 Upper shore techniques Sea walls Flood embankments Managed retreat 4.6 Supra shore Dune building Cliff strengthening Beach 
ridge restructuring CHAPTER 5: CASE HISTORIES Happisburgh to Winterton . Elmer Dinas Dinlle Colne barrier Mablethorpe to Skegness Hunstanton and Heacham Clacton to Jaywick Hamford Water Essex saltmarsh regeneration Aldeburgh Morecambe Dovercourt to BrambIe Island Windermoor Northey Island Benacre Broad Sand Bay Sizewell Beach Sefton Coast Tankerton slopes Fairlight Cove Pebbleridge Hart Warren dunes","shoreline process; sediment transport; shoreline management; beach recharge","en","report","Ministry of Agriculture, Fisheries and Food","","","","","","","","","","","","KWP-collection",""
"uuid:a94a9f3a-3763-4e65-ba59-bc08082c2680","http://resolver.tudelft.nl/uuid:a94a9f3a-3763-4e65-ba59-bc08082c2680","Stochastic approaches for damage evolution in standard and non-standard continua","Carmeliet, J.; De Borst, R.","","1995","","A0210 Algebra, set theory, and graph theory; A0230 Function theory, analysis; A0250 Probability theory, stochastic processes, and statistics; A0260 Numerical approximation and analysis; A0540 Fluctuation phenomena, random processes, and Brownian motion; A4630C Elasticity; A4630J Viscoelasticity, plasticity, viscoplasticity, creep, and stress relaxation; Civil; continuum; continuum damage; Damage; damage evolution; damage model; damage process; Deformation; Differential Equations; discretization; elasticity; Equation; Failure; failure mode; failure process; finite element; finite element analyses; finite element discretization; Heterogeneity; imperfections; inhomogeneity; internal length scale; length parameter; material properties; mechanical; Model; Monte Carlo technique; nonlocal damage; nonlocal damage model; nonstandard continua; Numerical analysis; Numerical simulation; numerical simulations; Numerical-simulation; quasi brittle materials; quasibrittle materials; Regularization","en","journal article","","","","","","","","","","","","","",""
"uuid:0fbba141-fecd-45ac-9169-bc56dbe76269","http://resolver.tudelft.nl/uuid:0fbba141-fecd-45ac-9169-bc56dbe76269","Van user interface DELFT3D naar SIMONA-preprocessor","Verboom, G.K.","","1995","","computerprogramma's; software; programma-ontwikkeling; software development; gegevensverwerking; data processing","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:b1a322a5-744c-4b2d-bec8-bc5fc84348be","http://resolver.tudelft.nl/uuid:b1a322a5-744c-4b2d-bec8-bc5fc84348be","Processing and recognition of handwriting in multimedia environments","Yang, L.","Arnbak, J.C. (promotor)","1995","","handwriting; processing; online handwriting recognition; multimedia","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:da02224c-bb8f-4465-a14c-c40d04958dd4","http://resolver.tudelft.nl/uuid:da02224c-bb8f-4465-a14c-c40d04958dd4","Automation of Assembly operations on Parts","Baartman, J.P.","Reijers, L.N. (promotor)","1995","","flexible assembly; industrial robots; process planning","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:a40db402-86f1-436a-90e3-1c9636e43ef4","http://resolver.tudelft.nl/uuid:a40db402-86f1-436a-90e3-1c9636e43ef4","A New Design Studio: Intelligent Objects and Personal Agents","Engeli, M.; Kurmann, D.; Schmitt, G.","","1995","","Architectural Design; Design Process; Virtual Reality; Artificial Intelligence; Personal Agents","en","conference paper","","","","","","","","","Architecture","","","","",""
"uuid:2d95ebc0-fc8e-45a0-92b7-a4bc9757e43b","http://resolver.tudelft.nl/uuid:2d95ebc0-fc8e-45a0-92b7-a4bc9757e43b","A MODEL ON THE WILLINGNESS TO PAY FOR DRIVER INFORMATION","Van der Zijpp, N.J.; Bovy, P.H.L.","","1995","","Conference 8525 driver information 8572 cost benefit analysis 0188 intelligent vehicle highway system 8531 decision process 2248 mathematical model 6473 Traffic theory (71)","en","journal article","","","","","","","","","","","","","",""
"uuid:3a513f11-7728-43b7-80a9-23352eedd847","http://resolver.tudelft.nl/uuid:3a513f11-7728-43b7-80a9-23352eedd847","A new method to convert unleveled marine seismic data to leveled split-spread data","Barens, L.M.; Van Borselen, R.G.; Fokkema, J.T.; Van den Berg, P.M.","","1995","","data acquisition data processing depth equations geophysical methods marine methods multiples numerical models seismic methods 20 Applied geophysics","en","conference paper","Society of Exploration Geophysicists","","","","","","","","","","","","",""
"uuid:f9710efe-fb83-46a0-a9a2-9c6e972e7b92","http://resolver.tudelft.nl/uuid:f9710efe-fb83-46a0-a9a2-9c6e972e7b92","Coastal processes along the Ebro, Po and Rhone deltas","Jimenez, J.A.; Capobianco, M.; Suanez, S.; Ruol, P.; Fraunie, P.; Stive, M.J.F.","","1995","","23 Coastal and Offshore Structures (CE); CIVIL; Coastal Environment; Coastal Environments; Coastal Processes; Complexity; Deltas; Hierarchies; Plains","en","conference paper","MEDCOAST, ANKARA","","","","","","","","","","","","",""
"uuid:499ce5f2-00e0-4526-83cb-51f1f594aa28","http://resolver.tudelft.nl/uuid:499ce5f2-00e0-4526-83cb-51f1f594aa28","Tracking human-computer dialogues in process control applications","Van Paassen, R.","","1995","","process control; man/machine interface; computer interface; human/computer dialogue","en","conference paper","Delft University of Technology","","","","","","","","","","","","",""
"uuid:52676974-dc71-4043-9db8-02c1a7616e1b","http://resolver.tudelft.nl/uuid:52676974-dc71-4043-9db8-02c1a7616e1b","Use of chemostat data for modelling intracellular-inulinase production by Kluyveromyces marxianus in a high-cell-density fed-batch process","Hensing, M.C.M.; Vrouwenvelder, J.S.; Hellinga, C.; van Dijken, J.P.; Pronk, J.T.","","1995","","modelling; chemostat; extracellular inulinase; Kluyveromyces marxianus; fed-batch process","en","journal article","Springer","","","","","","","","","","","","",""
"uuid:ed02ccfd-e8c7-4ba6-b3cc-622f64850584","http://resolver.tudelft.nl/uuid:ed02ccfd-e8c7-4ba6-b3cc-622f64850584","A theoretical and experimental approach to the geophone-ground coupling problem based on acoustic reciprocity","Vos, J.; Cremers, B.B.; Drijkoningen, G.G.; Fokkema, J.T.","","1995","","acoustical waves coupling data acquisition data processing Europe experimental studies field studies geophones geophysical methods geophysical surveys ground methods instruments Netherlands seismic methods surveys theoretical studies Western Europe 20 App","en","conference paper","Society of Exploration Geophysicists","","","","","","","","","","","","",""
"uuid:5b6a5a4f-76a9-4099-8465-a963127998c4","http://resolver.tudelft.nl/uuid:5b6a5a4f-76a9-4099-8465-a963127998c4","Gas-phase synthesis of nano-structured semiconductors. Advanced materials and processing","Goossens, A.; Schoonman, J.; Yoshimura, M.","","1995","","Review Semiconductor materials Nanostructure Thin film Porous material Crystal growth from vapors VLS growth Chemical vapor deposition Precipitation Laser beam Wet process Chemical etching Sol gel process Article synthese Semiconducteur Nanostructure Couc","en","journal article","","","","","","","","","","","","","",""
"uuid:33142fd2-80f3-4fc3-9255-f59b30f8e6cd","http://resolver.tudelft.nl/uuid:33142fd2-80f3-4fc3-9255-f59b30f8e6cd","EEN DYNAMISCH PARKEERRESERVERINGSSYSTEEM VOOR AUTOLUWE BINNENSTEDEN.; A DYNAMIC PARKING SPACE RESERVATION SYSTEM FOR CITY CENTRES","Minderhoud, M.M.; Bovy, P.H.L.","","1995","","Conference 8525 telecommunication 9117 data processing 8655 car park 0916 urban area 0313 traffic restraint 0633 location 9061 simulation 9103 pay parking 0933 traffic control 0658 selection 9072 efficiency 5911 netherlands 8078 Traffic and transport plan","nl","conference paper","COLLOQUIUM VERVOERSPLANOLOGISCH SPEURWERK, GEERDINKHOF 237, AMSTERDAM, 1103 PZ, NETHERLANDS","","","","","","","","","","","","",""
"uuid:57e489d7-b7bf-4d0c-8b31-3b394ca9ed7e","http://resolver.tudelft.nl/uuid:57e489d7-b7bf-4d0c-8b31-3b394ca9ed7e","System identification for rubust process control. Nominal models and error bounds","Hakvoort, R.G.","Bosgra, O.H. (promotor)","1994","","system identification; robust control; process control","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:ff8779b4-5a0f-4bec-b67c-9223ffc1427b","http://resolver.tudelft.nl/uuid:ff8779b4-5a0f-4bec-b67c-9223ffc1427b","Network interface unit ""Artemis product dissemination by E-mail""","van Dorp, A.L.C.","","1994","ARTEMIS (Africa Real-Time Environmental Monitoring Information System) is used by FAQ (Food and Agriculture Organisation of the United Nations) to support both its internal prograimnes on early warning and food security and desert locust control as well as developing countries in Africa by processing image information originating from the METEOSAT and NOAA satellites into products like estimated rainfall, cold cloud duration and vegetation coverage. With this information, the progress and status of the growing season can be closely monitored and areas with possible crop failures, as well as potential locust breeding areas, identified in an early stage. Moreover, fast availability of ARTEMIS product is crucial for adequate decision making in this area. Until now, these products are disseminated to the external users via hardcopies, diskettes or e-mail-attached files. In each of these cases, the ARTEMIS operator has to make these products available by performing manual actions. Under ESA-contract NLR is developing a ""Network Interface Unit"" (NIU), which will enhance ARTEMIS with the capability to automatically answer e-mail requests for ARTEMIS products. Selectable parts of standard ARTEMIS products can thus be transferred to the end user via standard e-mail in the digital format required.","Data compression; Data storage; Data transfer (computers); Delivery; Electronic mail; Image processing; Information dissemination; Product development; Satellite imagery; User requirements","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:20a2f625-8d91-46d8-b174-7afa2b568139","http://resolver.tudelft.nl/uuid:20a2f625-8d91-46d8-b174-7afa2b568139","Supervised boundary formation","Orange, C.M.","Groen, F.C.A. (promotor); Young, I.T. (promotor)","1994","","image analysis; processing; segmentation","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:092e7c97-015f-4d4c-95f1-9c4d5daab8c4","http://resolver.tudelft.nl/uuid:092e7c97-015f-4d4c-95f1-9c4d5daab8c4","Noise filtering of image sequences","Kleihorst, R.P.","Biemond, J. (promotor)","1994","","Digital signal processing; video processing; order statistics","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:fbe4c178-a8f2-4eaf-ab44-a50cab9b55bb","http://resolver.tudelft.nl/uuid:fbe4c178-a8f2-4eaf-ab44-a50cab9b55bb","Structured parallelization of a multi-block Navier-Stokes solver targeting distributed memory platforms","Geschiers, J.P.","","1994","SOLEQS is a sequential three-dimensional multi-block multi-zone compressible Navier-Stokes solver, produced by NLR. In this paper is described how this large-scale application code in the order of 120,000 lines can be parallelized towards different computing platforms in a structured and manageable way. This paralellization is done step-wise, which enables paying careful attention to e.g. the communication and load balancing aspects of the solver. An efficient implementation is developed specifically targeted for RISC-based IBM parallel platforms. Performance results and communication characteristics on an IBM workstation cluster and IBM SPl are presented and discussed.","Applications programs (computers); Balancing; Computational grids; Computer systems performance; Distributed processing; Domain decomposition; IBM computers; Interprocessor communication; Navier-Stokes equation; Parallel processing (computers); Run time (computers); Workstations","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:20693285-928e-47d2-9893-bf014f79f770","http://resolver.tudelft.nl/uuid:20693285-928e-47d2-9893-bf014f79f770","SPINE: Software platform for computer supported co-operative work in heterogenous computer and software environments","Loeve, W.; Baalbergen, E.H.","","1994","In development of High Performance Computing and Networking (HPCN) for simulation applications several targets have to be realized. It is recpiired to maximize not only computational speed but also to maximize applicability in engineering environments. For applications in which HPCN-based simulations are economically and technically necessary tools, it is also essential to minimize the time-to-market of new HPCN tools. HPCN environments are characterized by the application of a variety of computer platforms: vector computers in single emd parallel processor modes, scalar computers with small or large numbers of processors, workstations and a variety of other processing servers for support functions. The computer platforms usually are integrated in a network. The complexity of the heterogenous HPCN computer networks and of the collections of software systems implemented on the networks increases continuously. With increasing economic relevance of HPCN also the need of co-operation of users of HPCN and of developers of HPCN facilities appears to increase. Management of the complexity and co-operation requires tools to realize the above-mentioned targets. In the present paper a software platform (SPINE) is described. The software platform can be installed on arbitrary networks of UNIX computers. The platform can be used for realization of HPCN environments for specific application areas such as computational fluid mechanics or computer aided software development. The platform offers for the specific application the required functions in an integrated way. 
This integration concerns user interaction, information management, control (concerning specific software as well as concerning the computer network) and processing. The software platform is illustrated with examples from three instantiations of the platform. These are for computational fluid dynamics, computer aided software engineering and computer aided control engineering for non-linear dynamic systems. SPINE appears to be very flexible.","Distributed processing; Computer networks; Data transfer (computers); Data management; Files (tools); Graphical user interface; Information systems; Functional integration; Software engineering; Software tools; Unix (operating system); User requirements","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:5bc9ce9b-369a-4835-ae17-493236cbaf9f","http://resolver.tudelft.nl/uuid:5bc9ce9b-369a-4835-ae17-493236cbaf9f","Gegevensbehoefte van Noordzee- en Waddenzeemodellen","Boon, J.G.; Bokhorst, M.","","1994","","Waddenzee; North Sea; waterkwaliteitsmodellen; water quality models; gegevensverwerking; data processing; modelijking; model calibration","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:5dc5ea88-bff2-45b4-905b-98f233f3c320","http://resolver.tudelft.nl/uuid:5dc5ea88-bff2-45b4-905b-98f233f3c320","Subsampling Methods for Image Sequence Coding","Belfor, R.A.F.","Biemond, J. (promotor)","1994","","image coding; video coding; image processing","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:c5f2b7dc-a94d-40b9-ac7d-e97f5198137e","http://resolver.tudelft.nl/uuid:c5f2b7dc-a94d-40b9-ac7d-e97f5198137e","Some future directions in computational failure mechanics","De Borst, R.; Carmeliet, J.; Pamin, J.; Sluys, L.J.","","1994","","39 Structural mechanics (Ah); Aero; brittle Materials; computer Techniques; CONFERENCE; Damaging; Deformation; Failure; Finite element method; fracture; fracturing; Heterogeneity; mechanical Engineering; Mechanics; Model; Softening; Stochastic Processes","en","conference paper","Kluwer Academic Publishers","","","","","","","","","","","","",""
"uuid:d292269c-4621-4e50-a9ce-f12cb2f11e9c","http://resolver.tudelft.nl/uuid:d292269c-4621-4e50-a9ce-f12cb2f11e9c","Image analysis of surf zone hydrodynamics","Redondo, J.M.; Rodriguez, A.; Bahia, E.; Falques, A.; Gracia, V.; Sanchez-Arcilla, A.; Stive, M.J.F.","","1994","","Hydrodynamics Image analysis Water waves Coastal zones Image recording Oceanography Surfaces Ocean currents Measurements Tracking position Surf zone hydrodynamics Digital image processing Video image recording Sea surfaces Longshore current Dispersion mea","en","conference paper","Ministerio de Educacion y Ciencia (Spain); Office of Naval Research; Generalitat de Catalunya; Japan Society of Civil Engineers; E.T.S. d'Enginyers de Camins; et al","","","","","","","","","","","","",""
"uuid:44236190-814d-4c8f-91a9-2d5b7ae073b4","http://resolver.tudelft.nl/uuid:44236190-814d-4c8f-91a9-2d5b7ae073b4","Process Development of Thermal HydroDechlorination","ten Kate, A.J.B.","van den Bleek, C.M. (promotor); van den Berg, P.J. (promotor)","1993","","chlorinated waste treatment; process development; modelling","en","doctoral thesis","Delft University Press","","","","","","","","Applied Sciences","","","","",""
"uuid:7fef8c52-6adc-45d5-8913-6d3fe044139b","http://resolver.tudelft.nl/uuid:7fef8c52-6adc-45d5-8913-6d3fe044139b","Parallel parsing","De Vreught, J.P.M.","Van Westrhenen, S.C. (promotor)","1993","The topic of this thesis is parallel parsing using context-free grammars and attribute grammars. The first part concentrates on slow and fast parallelism of the parsing and recognition process using double dotted items. The second part describes the maximum derivation length of acyclic grammars, the decoration of parses, and a raking algorithm on binary DAGs.","parallel algorithms; natural language processing","en","doctoral thesis","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:85455914-6629-4421-8c77-27cc44e771ed","http://resolver.tudelft.nl/uuid:85455914-6629-4421-8c77-27cc44e771ed","Digital particle image velocimetry: Theory and application","Westerweel, J.","Nieuwstadt, F.T.M. (promotor)","1993","","Fluid dynamics; digital image processing","en","doctoral thesis","Delft University Press","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""
"uuid:2ed11518-0c01-4989-aa88-a02edb0c9217","http://resolver.tudelft.nl/uuid:2ed11518-0c01-4989-aa88-a02edb0c9217","Technologische Anwendung der Partikelformanalyse","Drescher, S.","Scarlett, B. (promotor)","1993","","particle shape analysis; ceramic processing","de","doctoral thesis","Delft University Press","","","","","","","","Applied Sciences","","","","",""
"uuid:290eae73-321e-425b-8fc2-e4c9e75c7b83","http://resolver.tudelft.nl/uuid:290eae73-321e-425b-8fc2-e4c9e75c7b83","Determining particle size distributions from video images by use of image processing","De Graaff, J.; Slot, R.E.","","1993","Recently a lot of research is being done on cohesive sediment. It plays a major role in the shoaling of harbours and waterways, and in some serious environmental problems. To predict cohesive sediment transport, information is needed about the distributions of size and settling velocities. Many methods exist to determine the sizes of suspended particles, but most are not applicable to cohesive sediment flocs, because of their fragility. If the flocs do not break at sampling, they break during the subsequent analysis by, for example, the Coulter Counter or the pipette method. In case of analysis by the Owen tube, another problem occurs in addition to the floc break-up at sampling: the long duration of the analysis leads to additional flocculation and causes the measured distribution to be even more unrealistic. To solve these problems, exposures are made by underwater cameras, which give instantaneous information about the undisturbed samples. From one exposure the floc sizes can be determined, and from two successive exposures with a known time between them, the settling velocities can be determined. So far, the analysis of exposures of flocs was mainly done by hand. Image processing by computer provides a way to do this automatically. It saves time, and consequently more flocs can be analyzed, leading to more representative distributions. The subject of this report is the development and testing of an image processing program to determine the size distribution. The program is applied to digitized exposures, as can be made by a framegrabber. The framegrabber converts a recording on tape or from a ccd camera into a matrix of digits, the value of each digit representing the brightness of the corresponding pixel. 
From this grey value image, the image processing program has to distinguish the relevant objects, in other words, make a binary image, consisting of object pixels and non-object pixels. This is quite complicated, due to inevitable interferences on the exposures like background features and shadow effects. After producing the binary image, the program has to determine particle sizes and calculate and plot the size distributions. This report describes the problems that are met when segmenting objects from a background (chapter 2), the mathematical methods to overcome them (chapter 3), some tests with the software that has been developed (chapter 4), and the results of these tests (chapter 5). The tests have been done on exposures with reference objects (ideal objects and background). The results are also visualized in the appendices. Details on the software, which was developed using the TCU software package for image processing, are also given in the appendices.","video processing; grainsize","en","report","TU Delft, Department Hydraulic Engineering","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:43f35c27-3424-471e-b77a-c72e421ef7af","http://resolver.tudelft.nl/uuid:43f35c27-3424-471e-b77a-c72e421ef7af","Some aspects of aircraft response to atmospheric turbulence","Noback, R.","","1993","Aspects of the requirements for and the calculation of design loads due to atmospheric disturbances are discussed. Special attention has been paid to the relation between discrete gusts and continuous turbulence. It is recommended that a worst case gust model, as described in this report, be further developed.","aircraft safety; atmospheric turbulence; gust loads; dynamic response; gusts; atmospheric models; power spectra; random processes; Fourier transformation; aerodynamic loads; mathematical models; aircraft design","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:131ab1e9-e52c-4e0b-9ea4-7329df3c8796","http://resolver.tudelft.nl/uuid:131ab1e9-e52c-4e0b-9ea4-7329df3c8796","Biangular decomposition of seismic data","Ter Doest, P.J.K.; Vercruijsse, P.A.; Fokkema, J.T.","","1993","","amplitude amplitude versus angle analysis data processing deconvolution elastic waves geophysical methods inclined seismic interface mathematical methods Radon transforms reflection seismic methods seismic waves 20 Applied geophysics","en","conference paper","Society of Exploration Geophysicists","","","","","","","","","","","","",""
"uuid:6742381d-409c-4065-9dbd-5fca2178ed83","http://resolver.tudelft.nl/uuid:6742381d-409c-4065-9dbd-5fca2178ed83","Kinetics of tungsten low-pressure chemical-vapor deposition using WF6 and SiH4 studied by in situ growth-rate measurements","Ammerlaan, J.A.M.; Van der Put, P.J.; Schoonman, J.","","1993","","Thin film; Crystal growth; Experimental study; Tungsten; Chemical vapor deposition; Low pressure; Kinetics; In situ; Process control","en","journal article","","","","","","","","","","","","","",""
"uuid:105c83ec-6c79-4296-9ddb-2b49128dc0f1","http://resolver.tudelft.nl/uuid:105c83ec-6c79-4296-9ddb-2b49128dc0f1","Magnetic field effects on switching noise in a quantum point contact","Liefrink, F.; Scholten, A.J.; Dekker, C.; Dijkhuis, J.I.; Alphenaar, B.W.; Van Houten, H.; Foxon, C.T.","","1992","","A7220M Galvanomagnetic and other magnetotransport effects semiconductors/insulators; A7270 Noise processes and phenomena in electronic transport; A7320A Surface states, band structure, electron density of states; A7340L Electrical properties of semiconductor to semiconductor contacts, p n junctions, and heterojunctions; aluminium compounds; conduction band; conduction band bottom; conduction bands; fluctuations; g factor; GaAs Al sub x Ga sub 1 x As; gallium arsenide; III V semiconductors; INSPEC; interface electron states; Lande g factor; magnetic field; magnetic field effects; model; noise; point contacts; quantum Hall effect; quantum Hall regime; quantum point contact; quantum size effect; random noise; semiconductor; size effect; switching noise; temporal electrostatic fluctuations; transconductance; zero magnetic field","en","journal article","","","","","","","","","","","","","",""
"uuid:55d9afa5-0d85-4ad7-b816-90e75f8510be","http://resolver.tudelft.nl/uuid:55d9afa5-0d85-4ad7-b816-90e75f8510be","Long waves on the North Sea: An investigation of the requirements on measurements and data processing","Valk, C.F. de","","1992","","North Sea; gegevensverwerking; data processing; golfmeting; wave measurement; lange golven; long waves","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:ae6a6823-ffb7-43b7-b599-f75e966470fe","http://resolver.tudelft.nl/uuid:ae6a6823-ffb7-43b7-b599-f75e966470fe","Computing with cables: Towards massively parallel neuro computers","Klaassen, A.J.","Van de Goor, A.J. (promotor)","1992","","neural information processing; massively parallel computing; artificial intelligence","en","doctoral thesis","Delft University Press","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:4f9661c3-0069-452b-89f7-185e8bb064a1","http://resolver.tudelft.nl/uuid:4f9661c3-0069-452b-89f7-185e8bb064a1","Multi-sensor data fusion in a distributed environment; architectural solutions","Zuidgeest, R.G.","","1992","The human operator observing the real world is confronted with a huge amount of data from multiple sensors observing that world. Multi-sensor data fusion (MSDF) is one of the emerging fields in advanced information processing, concerned with fusing sensor data from these multiple sensors. Automated MSDF can help the operator by processing sensor data into concise and surveyable information that is more useful than any single sensor can provide. The merit of MSDF can be increased by employing the knowledge of the human operator about the real world, the sensor systems and the fusion process. With the aid of this knowledge, automated MSDF can assign meaning to sensor data and is able to reason about the observed world at a high level, as humans do. Often, MSDF has an inherent distributed character, spatially as well as functionally. Architectural solutions should cope with this character. This report focuses on the possibility of employing distributed artificial intelligence techniques in MSDF applications, in particular command and control applications such as battlefield surveillance. A number of candidate architectures is presented, in which MSDF functions as well as aspects of distributed networks are demonstrated.","Artificial intelligence; Command and control; Architecture (computers); Signal processing; Data integration; Multisensor applications; Surveillance; Distributed processing; Expert systems","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:0b9deb1f-3b28-4f1a-850e-d18e32a83d9e","http://resolver.tudelft.nl/uuid:0b9deb1f-3b28-4f1a-850e-d18e32a83d9e","The turbulent boundary layer: Spanwise structure, evolution of low-velocity regions and response to artificial disturbances","Brand, A.J.","Nieuwstadt, F.T.M. (promotor)","1992","","Fluid Dynamics: coherent structures; flow visualization; fluid injection; stability analysis; turbulent boundary layer; Digital Image Processing: image analysis; logical operations; morphological operations","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:e6cedfe5-c16c-476b-a377-7194db11054f","http://resolver.tudelft.nl/uuid:e6cedfe5-c16c-476b-a377-7194db11054f","Low-frequency noise in quantum point contacts","Liefrink, F.; Scholten, A.J.; Dekker, C.; Dijkhuis, J.I.; Eppenga, R.; Van Houten, H.; Foxon, C.T.","","1992","","A7270 Noise processes and phenomena in electronic transport; A7340L Electrical properties of semiconductor to semiconductor contacts, p n junctions, and heterojunctions; aluminium compounds; B2530B Semiconductor junctions; electron traps; electrostatic potential; fluctuations; GaAs Al sub x Ga sub 1 x As; gallium arsenide; III V semiconductors; INSPEC; low frequency noise; noise; point contacts; quantum point contact; quantum point contacts; resistance; resistance noise; single electron trap; trapped electron","en","conference paper","IOS Press","","","","","","","","","","","","",""
"uuid:98f4fe21-971f-448a-a403-1bba2323cdf2","http://resolver.tudelft.nl/uuid:98f4fe21-971f-448a-a403-1bba2323cdf2","Molecular and macroscopic orientational order in aramid solutions: A model to explain the influence of some spinning parameters on the modulus of aramid yarns","Picken, S.J.; Van der Zwaag, S.; Northolt, M.G.","","1992","","Phthalamide polymer Aromatic polymer Solvent spinning Lyotropic state Nematic state Property processing relationship Mechanical properties Elastic modulus Concentration effect Temperature effect Short range order Aramid fiber Mean field approximation Math","en","journal article","","","","","","","","","","","","","",""
"uuid:67408fa6-8c22-4586-95b4-9a73322605ee","http://resolver.tudelft.nl/uuid:67408fa6-8c22-4586-95b4-9a73322605ee","Low-frequency noise of quantum point contacts in the ballistic and quantum Hall regime","Liefrink, F.; Scholten, A.J.; Dekker, C.; Eppenga, R.; Van Houten, H.; Foxon, C.T.","","1991","","A7220M Galvanomagnetic and other magnetotransport effects semiconductors/insulators; A7270 Noise processes and phenomena in electronic transport; A7335 Mesoscopic systems and quantum interference; backscattering; ballistic regime; conductance; electrostatic potential; fluctuations; INSPEC; low frequency noise; magnetic field; noise; point contacts; quantum Hall effect; quantum Hall regime; quantum interference phenomena; quantum point contact; quantum point contacts; resistance; resistance noise; spin degeneracy; strong magnetic field","en","journal article","","","","","","","","","","","","","",""
"uuid:06a694bc-6bdc-4bf3-9014-8eeac94a7623","http://resolver.tudelft.nl/uuid:06a694bc-6bdc-4bf3-9014-8eeac94a7623","Spontaneous resistance switching and low-frequency noise in quantum point contacts","Dekker, C.; Scholten, A.J.; Liefrink, F.; Eppenga, R.; Van Houten, H.; Foxon, C.T.","","1991","","A7220J Charge carriers: generation, recombination, lifetime, and trapping semiconductors/insulators; A7270 Noise processes and phenomena in electronic transport; A7320D Electron states in low dimensional structures; A7340L Electrical properties of semiconductor to semiconductor contacts, p n junctions, and heterojunctions; carrier mobility; charge transport; conductance; electron device noise; electron traps; electrostatic potential; frequency dependence; INSPEC; local electrostatic potential; low frequency noise; low frequency noise spectroscopy; model; noise; point contacts; quantum point contact; quantum point contacts; quantum size effect; resistance; resistance switching; semiconductor quantum dots; semiconductors; size effect; spectral density; temperature dependence; transport; trapping; white noise","en","journal article","","","","","","","","","","","","","",""
"uuid:8e3bec2a-57d4-4eda-b62f-3dba920a7c99","http://resolver.tudelft.nl/uuid:8e3bec2a-57d4-4eda-b62f-3dba920a7c99","Some aerospace applications of estimation theory","Moek, G.","","1991","In this report, the application of estimation theory to three problems from aerospace research is described. The applications considered deal with spacecraft attitude estimation, software reliability modelling and estimation, and a satellite-based navigation algorithm. Maximum likelihood estimation is used in the software reliability case and mean-square estimation in the other two applications. An adaptive iterated extended Kalman filter is evaluated on the basis of simulated and real spacecraft attitude measurement data. Results based on different combinations of attitude measurements and sensor models are compared. A satellite-based navigation algorithm using a spline approximation of a vehicle trajectory and a recursive non-linear least squares estimation algorithm is described. Its application to simulated pseudo range measurements shows the potential of accurate position determination. Maximum likelihood parameter estimation for four software reliability models is analysed. Conditions are derived pertaining to the existence of bounded solutions of the likelihood equations. Uniqueness of such solutions is proved for two of the four models. Some results of the application to both simulated and real software failure times are given.","satellite attitude control; computer-program integrity; reliability analysis; satellite navigation systems; adaptive filters; stochastic processes; least squares method; non-linear systems; algorithms; state estimation; error analysis; Kalman filters; spline functions; maximum likelihood estimates","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:941e5999-645f-43cb-9cb7-07dcf74289b7","http://resolver.tudelft.nl/uuid:941e5999-645f-43cb-9cb7-07dcf74289b7","A study on focusing in telescience using the TELEPODI breadboard","Pinches, C.L.; Kuijpers, E.A.; Spaan, F.","","1991","Microgravity experiments operated using telescience concepts face certain constraints. The effects of these constraints on the focusing task are identified. Four types of focusing strategy, which provide a natural means of focusing in the presence of these constraints, are described. Three specific strategies are implemented as part of the teleoperated prototype optical diagnostic instrument breadboard using a task dependent man machine interface. Evaluation of these strategies is based on considerations of accuracy of focus, working range and human factors issues.","optical equipment; television camera; remote control; focusing; man-machine systems; teleoperators; visual acuity; image processing; data transmission; time lag","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:0d130e9a-6419-4d0c-8e9a-c6a3f4eab4cd","http://resolver.tudelft.nl/uuid:0d130e9a-6419-4d0c-8e9a-c6a3f4eab4cd","Prestack imaging in the double transformed Radon domain","Tatalovic, R.; Dillen, M.W.P.; Fokkema, J.T.","","1991","","data processing geophysical methods Radon transformation seismic methods seismic migration transformations 20 Applied geophysics","en","conference paper","Society of Exploration Geophysicists, International Meeting and Exposition","","","","","","","","","","","","",""
"uuid:663e99b9-f9fb-41f5-99ec-a56a1de9c389","http://resolver.tudelft.nl/uuid:663e99b9-f9fb-41f5-99ec-a56a1de9c389","Complexity analysis in the double transformed Radon domain","Vissinga, M.; Fokkema, J.T.","","1991","","data processing discontinuities geophysical methods Radon methods seismic methods 20 Applied geophysics","en","conference paper","","","","","","","","","","","","","",""
"uuid:43bd3d99-a2c6-453b-a80a-a96184ba426c","http://resolver.tudelft.nl/uuid:43bd3d99-a2c6-453b-a80a-a96184ba426c","Comparing stacking velocities","Vercruijsse, P.A.; Fokkema, J.T.","","1991","","data processing geophysical methods heterogeneous materials layered materials seismic methods seismic migration 20 Applied geophysics","en","conference paper","Society of Exploration Geophysicists, International Meeting and Exposition","","","","","","","","","","","","",""
"uuid:af758a9e-bfbc-484d-b25c-4c1be7ec77f0","http://resolver.tudelft.nl/uuid:af758a9e-bfbc-484d-b25c-4c1be7ec77f0","Surface-related multiple elimination based on reciprocity","Van Borselen, R.G.; Thorbecke, J.; Fokkema, J.T.; Van den Berg, P.M.","","1991","","data processing filters geophysical methods marine methods seismic methods 20 Applied geophysics","en","conference paper","Society of Exploration Geophysicists, International Meeting and Exposition","","","","","","","","","","","","",""
"uuid:ee82320c-dec8-4fbb-96fd-5a424a280c09","http://resolver.tudelft.nl/uuid:ee82320c-dec8-4fbb-96fd-5a424a280c09","Prestack depth migration in the double transformed Radon domain","Tatalovic, R.; Fokkema, J.T.","","1991","","data processing; geophysical methods; Radon transforms; seismic methods 20; Applied geophysics","en","conference paper","","","","","","","","","","","","","",""
"uuid:3db3eeb2-7663-42cc-bb72-a8aa1c10a67e","http://resolver.tudelft.nl/uuid:3db3eeb2-7663-42cc-bb72-a8aa1c10a67e","Extrapolation operators by beam tracing","Kremer, S.R.G.; Fokkema, J.T.; Wapenaar, C.P.A.","","1991","","amplitude beam tracing data processing extrapolation geophysical methods imagery seismic methods 20 Applied geophysics","en","conference paper","","","","","","","","","","","","","",""
"uuid:31c7ac69-daa1-48fe-98bb-12e2fd0d7edd","http://resolver.tudelft.nl/uuid:31c7ac69-daa1-48fe-98bb-12e2fd0d7edd","Telescience pilot experiment results using telePODI: Final report","Kuijpers, E.A.","","1990","The Prototype Optical Diagnostic Instrument (PODI) has been extended to Teleoperated PODI (TelePODI) to allow for the study of telescience and image processing for microgee related instrumentation. TelePODI has been integrated in the Telescience Test Bed, phase I, at ESTEC to simulate remote control. Telescience pilot experiments have been executed during two evaluation periods at ESTEC. The experiments were related to: optical systems check, experiment cell exchange, liquid handling, diagnostic performance verification, telemetry and telecommand handling, video handling, preprogramming, reprogramming, and remote execution of a plume experiment. The evaluation included experiments in which TelePODI was integrated in the Telescience Test Bed at ESTEC and controlled via Olympus from NLR Noordoostpolder. The integration and evaluation in the Telescience Test Bed are reported.","optical measuring instruments; image processing; test facilities; space processing; breadboard models; spaceborne experiments; remote control; space environment simulation","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:031d71d0-dd9d-4eab-bc08-0ca654dfa91b","http://resolver.tudelft.nl/uuid:031d71d0-dd9d-4eab-bc08-0ca654dfa91b","Low-level image processing architectures: Compared for some non-linear recursive neighbourhood operations","Komen, E.R.","Young, I.T. (promotor)","1990","","image processing; computer architectures; algorithms","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","",""
"uuid:e9f16b13-0eaf-4bb2-8dcf-81e64a3dc60e","http://resolver.tudelft.nl/uuid:e9f16b13-0eaf-4bb2-8dcf-81e64a3dc60e","Telescience experiments using the prototype optical diagnostic instrument (PODI)","Kuijpers, E.A.","","1990","The breadboard called PODI (Prototype Optical Diagnostic Instrument) was developed following a study for a general facility for experimental research on fluid physics in a space laboratory. PODI has been developed further for the study of telescience and image processing related to microgee instrumentation. The extended breadboard, called TelePODI, is being used for various telescience experiments. Some intermediate results are reported and discussed.","spaceborne experiments; fluid flow; flow visualization; Schlieren photography; breadboard models; cameras; focussing; teleoperators; image processing; data compression","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:d7711415-bf6c-4f87-87d1-4f1b537ca28a","http://resolver.tudelft.nl/uuid:d7711415-bf6c-4f87-87d1-4f1b537ca28a","Bayesian estimation for decision-directed stochastic control","Blom, H.A.P.","","1990","Stochastic processes with a decision-directed control are considered as controlled Markov processes, the state space of which is hybrid; i.e. a product of a discrete set and a Euclidean space. This approach yields a mathematical model for many problems of decision-directed stochastic control. In general, the observations made from the ""past"" and ""present"" Markov state do not lead to a perfect knowledge of the ""present"" discrete-valued state component. In such situations, the optimal control may be obtained by applying two successive steps: - Bayesian estimation (evaluation of the conditional distribution) of the Markov process, - Optimal control of the conditional distribution on the basis of perfect knowledge of its evolution. Unfortunately, the evaluation of each of these steps implies significant difficulties in case the Markov state is hybrid. The thesis is directed to the modelling of hybrid state Markov processes and to solving problems that are associated with the Bayesian estimation of these processes.","martingales; stochastic processes; Markov processes; algorithms; smoothing; mathematical models; optimal control; decision theory; state estimation; nonlinear filters; tracking filters","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:131f8c2e-ede8-43bc-b1bc-1706f59727a2","http://resolver.tudelft.nl/uuid:131f8c2e-ede8-43bc-b1bc-1706f59727a2","One-pass inversion of marine seismic data","Koster, J.K.; Ziolkowski, A.M.; Fokkema, J.T.; Tatalovic, R.; Tijdens, E.","","1990","","algorithms amplitude data processing deconvolution geophysical methods inverse problem marine methods normal moveout reflection seismic methods variations 20 Applied geophysics","en","conference paper","","","","","","","","","","","","","",""
"uuid:fa6501e3-7d05-4c82-acd6-a160a229761d","http://resolver.tudelft.nl/uuid:fa6501e3-7d05-4c82-acd6-a160a229761d","Reconstruction of the holocene evolution of the Dutch coast. The Dutch coast. Paper no 2","Zitman, T.J.; Stive, M.J.F.; Wiersma, H.","","1990","","Coastal zones GEOLOGY Geomorphology Holocene period coastal evolution geography coastal genesis programme coastal processes 471 (Marine Science and Oceanography) 481 (Geology and Geophysics); evolution","en","conference paper","","","","","","","","","","","","","",""
"uuid:ac1adaff-fa2c-49a7-ae39-109cbcd11868","http://resolver.tudelft.nl/uuid:ac1adaff-fa2c-49a7-ae39-109cbcd11868","Large-Scale Coastal Evolution Concept","Stive, M.J.F.; Roelvink, D.A.; De Vriend, H.J.","","1990","","53 Waterways (CE); CIVIL; Coastal Engineering; Coastal Morphology; Coastal Processes; Estuaries; evolution; netherlands; Sand Transport; Sea Level","en","conference paper","American Society of Civil Engineers","","","","","","","","","","","","",""
"uuid:b57ae857-6463-439b-b483-051a6fa03313","http://resolver.tudelft.nl/uuid:b57ae857-6463-439b-b483-051a6fa03313","Prestack shotpoint and common midpoint migration using the split-step Fourier algorithm","Stoffa, P.L.; Sen, M.K.; Fokkema, J.T.; Kessinger, W.","","1990","","algorithms data processing Fourier analysis frequency domain analysis geophysical methods imagery lateral heterogeneity raypaths seismic methods seismic migration 20 Applied geophysics","en","conference paper","","","","","","","","","","","","","",""
"uuid:c80fc653-291e-4501-870c-283607d9d55c","http://resolver.tudelft.nl/uuid:c80fc653-291e-4501-870c-283607d9d55c","Nearshore circulation","Battjes, J.A.; Sobey, R.J.; Stive, M.J.F.","","1990","Shelf circulation is driven primarily by wind- and tide-induced forces. It is laterally only weakly constrained, so that the geostrophic (Coriolis) acceleration is manifest in the response. Nearshore circulation, on the other hand, is dominated by wave-induced forces associated with shallow-water wave breaking and is confined to a relatively narrow shore-bounded area. For brevity and for clarity of presentation, only wave-induced nearshore circulation is considered in this chapter, with zero mean flow far offshore. The purpose of this chapter is to give a state-of-the-art review of the subject, rather than a presentation of recent research results. Emphasis is placed on the physics. Mathematical formulations of the most important relations are given, but solution techniques are only briefly referred to, without analytical derivations or numerical algorithms.","wave action; longshore current; nearshore processes; wave driven currents","en","conference paper","Harvard University Press","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:3c0275e7-cde8-46ff-a06c-02e47014e5f4","http://resolver.tudelft.nl/uuid:3c0275e7-cde8-46ff-a06c-02e47014e5f4","Large-scale coastal evolution concept. The Dutch coast. Paper No. 9","Stive, M.J.F.; Roelvink, D.A.; De Vriend, H.J.","","1990","","Coastal zones GEOLOGY Netherlands OCEANOGRAPHY Sea Level Changes FLOW OF WATER Sediment Transport Coastal evolution coastal processes holocene period cross shore flow longshore transport 471 (Marine Science and Oceanography) 481 (Geology and Geophysics) 6; evolution; netherlands","en","conference paper","","","","","","","","","","","","","",""
"uuid:e6c0fa55-ac6b-4805-aa6f-6945d2c5d49f","http://resolver.tudelft.nl/uuid:e6c0fa55-ac6b-4805-aa6f-6945d2c5d49f","Remote sensing en landmeetkunde","Looyen, W.J.","","1989","Working with Remote Sensing data can be generalized into four categories: - registration - processing - interpretation - presentation. A short overview of these four categories will be given. Emphasis will be put on the Dutch airborne multichannel pushbroom scanner CAESAR showing specific geodetic points of interest in working with Remote Sensing data.","remote sensing; classifying; imaging techniques; image processing; geometric rectification (imagery); multispectral band scanners; photomapping; aerial photography; satellite imagery","nl","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:a1dc9a00-4c28-4406-a959-e24693e9480a","http://resolver.tudelft.nl/uuid:a1dc9a00-4c28-4406-a959-e24693e9480a","A measurement system for production flight tests of new aircraft","van de Leijgraaf, R.; van Dorp, W.A.; Storm van Leeuwen, S.; Udo, R.","","1989","Today there is an increasing pressure on the fast delivery of a newly manufactured aircraft and on the strict guarding of its performance figures. It is therefore essential to reduce the necessary time between completion of the new aircraft and the moment of delivery to an airline. This means that production flight testing time must be minimized while the calculation accuracy of the performance figures must preferably be increased, or must at least stay at the same level. With this goal in mind, a measurement system has been developed for the production flight tests with the Fokker 50 and the Fokker 100 production aircraft. This system had the following design objectives: - portable - installation time less than 3 hours - on board processing facilities for the following purposes: - calibration of measured data - real time calculation of performance parameters - quick look facilities for the flight test engineer - data available immediately after the flight for the Fokker engineers. The system can be divided into two parts. Part One is the Data Acquisition Module which gathers data from the aircraft systems and records this raw data on magnetic tape for backup. The Data Acquisition Module is developed using the basic modular concept and basic components of the measurement systems used in the prototypes of the Fokker 50 and Fokker 100 aircraft. The second part is the Data Processing Module which performs the calibration of the measured data, the real time calculation of performance parameters and provides quick look presentation facilities during the measurements of the production flight tests. For the Data Processing Module the VME bus computer concept is adopted. 
The processed data is recorded on magnetic tape in a computer compatible format and is ready for analysis immediately after the flight. In this paper, the measurement system for the production flight tests will be described with the emphasis on the hardware and software description of the Data Processing Module. Furthermore, the use of the system in the operational phase and experience from the first few flights with this system in the Fokker 50 and Fokker 100 aircraft will be given.","flight test instruments; measuring instruments; data processing; data acquisition; aircraft production; applications programs (computers); data recording; Fokker Aircraft; real time operation; equipment specification","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:8469d331-9912-4f15-a24d-8fffe7e68bc8","http://resolver.tudelft.nl/uuid:8469d331-9912-4f15-a24d-8fffe7e68bc8","Development of silt measuring methods: Electronic signal processing, part II - a second generation acoustic measuring system","Gervink, B.J.G.M.; Dorenbos, G.J.; Berkhoudt, N.","","1989","","gegevensverwerking; data processing; zwevend-transportmeters; suspended load meters; akoestische meting; acoustic measurement; silttransport; silt transport; slibgehaltemeters; mud content meters","en","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:e69767f9-328f-4183-b40d-804fcd587537","http://resolver.tudelft.nl/uuid:e69767f9-328f-4183-b40d-804fcd587537","The use of MEBAS in creating a simulation environment for compression and encryption","Kordes, F.L.G.; Schuurman, J.J.","","1989","In order to get a better sight on the performance of compression and encryption algorithms under various conditions a simulation environment is needed. Within this simulation environment It must be possible to investigate the influence of code tables, channel characteristics and frame formats on the performance of the algorithms. In this report the use of MEBAS in creating this simulation environment is studied.","information systems; systems engineering; software tools; data management; computerized simulation; subroutine libraries (computers); frames (data processing); source programs; data compression","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:589cfc34-88f9-4935-a58c-4ee20f2b0e91","http://resolver.tudelft.nl/uuid:589cfc34-88f9-4935-a58c-4ee20f2b0e91","Zeezandwinning: Onderbouwend rapport (milieu effect rapportage RON ; discussienota kustverdediging, technisch rapport 10)","Ribberink, J.S.","","1989","","zandwinning; sand dredging; kustmorfologie; coastal morphology; kustverdediging; coast protection; sedimenttransportprocessen; sediment transport processes","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:3809dc56-bd6b-4dcf-848e-f9ccb2732191","http://resolver.tudelft.nl/uuid:3809dc56-bd6b-4dcf-848e-f9ccb2732191","Processing of advanced ceramics which have potential for use in gas turbine aero engines","Maccagno, T.M.","","1989","SiaN4 and SiC based advanced ceramics that have been produced by hot isostatic pressing (HIP'ing) have good potential to be used as hot section components in gas turbine aero engines. This report provides background for an NAE-SML investigation of this potential. The report begins with a general overview of the many fabrication methods that have been used to produce both monolithic ceramics and SiC whisker reinforced composite ceramics. This is followed by a comprehensive survey of past efforts to produce SiaN4 and SiC based ceramics by HIP 'ing. It is apparent that many of these efforts have involved HIP'ing of material that has already been densified by sintering, but such an approach does not really allow the fulI benefits of HIP processing to be realized. On the other hand, HIP'ing of SiaN4 based composite produced by reaction bonding may result in ceramic material of superior quality. It also appears that manY previous efforts have resorted to incorporating densifying aids into the starting material, even though high temperature properties may suffer as aresult. It is suggested th at HIP'ing of vacuum encapsulated SiaN 4 or SiC particulate which contains SiC whiskers for reinforcement, but which does not contain densifying aids, may be a method of producing ceramic material of sufficient quality to be considered for use in gas turbine engines.","Gas turbine Engines - ceramics; Ceramics - Processing; Hot pressing","en","report","National Research Council Canada","","","","","","Campus only","","","","","","",""
"uuid:b677e5b1-cd5c-4b0c-838e-85a36957460b","http://resolver.tudelft.nl/uuid:b677e5b1-cd5c-4b0c-838e-85a36957460b","Quantitative updating of land use information on 1:50,000 scale topographic maps using spot landsat thematic mapper imagery","van der Laan, F.B.; Meijer, P.G.","","1989","This report gives the results of a study carried out in the Remote Sensing Department of the National Aerospace Laboratory (NLR) in The Netherlands concerning the updating of land use on topographic maps using remote sensing information. It was found that SPOT multispectral imagery is less suitable for land use classification purposes than Landsat Thematic Mapper data due to the lack of a spectral band in the mid-infrared. The most promising approach for updating land use information on topographic maps on a raster image processing system as available at NLR, seems to be the use of digitized topographic map information at 10-m resolution together with Landsat Thematic Mapper data resampled to the same resolution. In this report, maps of change in land use are given. Such end product maps can technically be produced at a rate of 1-2 maps per week for all 120 maps sheets of The Netherlands. The main technical problem in the production of maps is the limited availability of cloudfree (summer) imagery. This availability has statistically been assessed to be once per two years on the average.","satellite imageiry; thematic mappers (Landsat); Spot (French Satellite); mapping; land use; classifications; topography; image resolution; image processing; raster scanning; geometric rectification (imagery)","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:a4a4a577-383f-4bcd-b914-1692af12f1a1","http://resolver.tudelft.nl/uuid:a4a4a577-383f-4bcd-b914-1692af12f1a1","Feedback signal of a seismic vibrator","Baeten, G.J.M.; Fokkema, J.T.; Ziolkowski, A.M.","","1989","","data acquisition data processing feedback geophysical methods ground force seismic methods vibrators 20 Applied geophysics","en","conference paper","Society of Exploration Geophysicists, International Meeting and Exposition","","","","","","","","","","","","",""
"uuid:e665e23f-cd4c-4168-8f9d-5e5e5df6867f","http://resolver.tudelft.nl/uuid:e665e23f-cd4c-4168-8f9d-5e5e5df6867f","Variability in mode choice in home-to-work travel","Van Vuren, T.; Bovy, P.H.L.","","1989","","Conference 8525 journey to work 0621 transport mode 1145 decision process 2248 selection 9072 itinerary 0699 netherlands 8078 occupation work 2271 planning 0133 journey time 0697 traffic survey 0676 program computer 8646 modal split 0675 Traffic and trans","en","conference paper","PTRC EDUCATION AND RESEARCH SERVICES LTD, GLENTHORNE HOUSE, HAMMERSMITH GROVE, LONDON, W6 0LG, UNITED KINGDOM (18.00#)","","","","","","","","","","","","",""
"uuid:43a8bf26-7c22-4edb-8930-6453313abc88","http://resolver.tudelft.nl/uuid:43a8bf26-7c22-4edb-8930-6453313abc88","ROUTEKEUZE VAN REIZIGERS.; ROUTECHOICE OF TRAVELLERS","Bovy, P.H.L.","","1989","","flow; Selection 9072 itinerary 0699 road network 2743 attitude psychol 2267 behaviour 9001 motivation 2295 decision process 2248 mathematical model 6473 forecast 0122 traffic flow 0671 theory 9078 Traffic theory (71) traffic and transport planning (72) accident","nl","book chapter","","","","","","","","","","","","","",""
"uuid:cfe3e30a-7d58-47f0-a86a-ee2c8fb68d12","http://resolver.tudelft.nl/uuid:cfe3e30a-7d58-47f0-a86a-ee2c8fb68d12","The effect of thiosulphate and other inhibitors of autotrophic nitrification on heterotrophic nitrifiers","Robertson, L.A.; Cornelisse, R.; Zeng, R.; Kuenen, J.G.","","1989","","Nitrification Heterotrophy Nitrification inhibitor Thiosulfates Hydroxylamine Autotrophy Mixotrophy Microorganism culture Batch process Chemostat Pseudomonas denitrificans Nitrification Heterotrophie Inhibiteur nitrification Thiosulfate Allylthiouree Hydr","en","journal article","","","","","","","","","","","","","",""
"uuid:73e99413-ef5c-4f48-93ef-3fbe2e6fef45","http://resolver.tudelft.nl/uuid:73e99413-ef5c-4f48-93ef-3fbe2e6fef45","The generation of a random process with a spectrum with fractional power","Noback, R.","","1988","A method is described to generate a random Gaussian process with a power spectrum with fractional power, for example the von Karman spectrum for atmospheric turbulence. The method uses digital filtering of a white noise process and it is based on the use of fractional derivatives. An algorithm to generate the process is presented.","statistical analysis; power spectra; atmospheric turbulence; gust loads; random processes; Von Karman equation; digital filters; probability density functions; white noise; weighting functions; differential calculus; integral calculus","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:7a83a6af-891e-4a30-b602-5eed07e2b37b","http://resolver.tudelft.nl/uuid:7a83a6af-891e-4a30-b602-5eed07e2b37b","Determination of the resin content in DSC prepreg samples using thermal analysis techniques","van der Hoeven, W.","","1988","Determination of the heat of reaction during the cure of the resin in carbon fibre reinforced epoxy prepreg using Differential Scanning Calorimetry resulted in considerable scatter with the measurements were done on prepreg samples. Differences in resin content between the DSC prepreg samples were suspected to be the main cause of the observed scatter. During this programme a procedure was evaluated to determine the resin content in the DSC prepreg samples. In this procedure the resin fraction in the samples is thermally decomposed in an non oxidising atmosphere. It is based on the assumption that there is a fixed ratio between the weight of resin in the original prepreg sample and the weight of cabonaceous residue that remains after complete decomposition of the resin. The weight during thermal decomposition is determined using Thermogravlmetric Analysis. The results indicated that the scatter in the heat of reaction data was reduced significantly when the suggested procedure was applied. However, further evaluation of the procedure is required to establish the validity of the predicted values for resin content and heat of reaction.","epoxy resins; curing; process heat; heat measurement; prepregs; error analysis; heat of formation; thermal analysis","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:1504d7ab-4c99-4759-91bc-eddad58cb672","http://resolver.tudelft.nl/uuid:1504d7ab-4c99-4759-91bc-eddad58cb672","Hoofdcomponenten-analyse van meetreeksen","Valk, C. de","","1988","","gegevensverwerking; data processing; numerieke analyse; numerical analysis; zeestand; sea level; getij-analyse; tidal analysis","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:67e5692c-0905-4ddd-8487-37fdda9af6b4","http://resolver.tudelft.nl/uuid:67e5692c-0905-4ddd-8487-37fdda9af6b4","Rock slopes and gravel beaches under wave attack","van der Meer, J.W.","Bijker, E.W. (promotor)","1988","Abstract not available","rubble mound breakwaters; rubble mound revetments; rock beaches; gravel beaches; coastal processes; coastal engineering; water-retaining structures","en","doctoral thesis","Delft Hydraulics Laboratory","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:e23c0e5b-1407-4239-8457-49f93abfe870","http://resolver.tudelft.nl/uuid:e23c0e5b-1407-4239-8457-49f93abfe870","Stormvloedkering Oosterschelde: Analyse turbulentiemetingen","Flokstra, C.","","1988","","Oosterschelde; Zeeland; turbulentie; turbulence; gegevensverwerking; data processing","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:a1f4367b-b518-4644-86e1-0fbc20f3a045","http://resolver.tudelft.nl/uuid:a1f4367b-b518-4644-86e1-0fbc20f3a045","Determination of macro subsurface models by generalised inversion","Van der Made, P.M.","Berkhout, A.J. (promotor)","1988","","generalized inversion; seismic interpretation; seismic processing; travelling inversion; macro model; raytracing","en","doctoral thesis","N.K.B. Offset bv","","","","","","","","Applied Sciences","","","","",""
"uuid:a9fb7696-fa6a-410c-abb5-c804e86c4f7e","http://resolver.tudelft.nl/uuid:a9fb7696-fa6a-410c-abb5-c804e86c4f7e","Feasibility of a Dutch process for microbial desulphurization of coal","Bos, P.; Huber, T.F.; Luyben, K.C.A.M.; Kuenen, J.G.","","1988","","Coal MICROORGANISMS Applications BIOTECHNOLOGY Research COAL PREPARATION PYRITES Removal SULFUR Removal Microbial desulfurization inorganic sulfudic minerals sink float process 524 (Solid Fuels) 802 (Chemical Apparatus and Plants, Unit Operations, Unit Pr","en","journal article","","","","","","","","","","","","","",""
"uuid:9f22cd72-4b1b-48b3-9b9b-1f68988545e3","http://resolver.tudelft.nl/uuid:9f22cd72-4b1b-48b3-9b9b-1f68988545e3","A model for cross-shore sediment transport","Stive, M.J.F.","","1988","","rapport SDL / Klastische sedimenten: afzettingen / Clastic sediments: deposits TLN / Kustprocessen / Coastal processes; Sediment Transport","en","conference paper","Delft Hydraulics","","","","","","","","","","","","",""
"uuid:bdc1a6a9-801f-43f1-8a17-d225dced03d4","http://resolver.tudelft.nl/uuid:bdc1a6a9-801f-43f1-8a17-d225dced03d4","Telescience and image processing for PODI: An analysis and a proposal for a simulation set-up","Kuijpers, E.A.","","1987","","Micro-gravity applications; Image processing; Algorithms; Data compression; Flow visualization; Schlieren photography; Focussing; Physics and chemistry experiment in space; Fluid flow","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:21c88ba7-a5ec-48b6-8098-f6b9e8ddfa76","http://resolver.tudelft.nl/uuid:21c88ba7-a5ec-48b6-8098-f6b9e8ddfa76","On the feasibility of applying robot vision to proximity extraction based on one camera in space","Kuijpers, E.A.","","1987","Existing robot vision algorithms are evaluated for application in space for proximity extraction using one camera. Lighting conditions and image formation differ from earth conditions which have implications. An algorithm is derived which is based on the use of run coding, tracking and a least mean error method. Problems concerning hardware implementation are discussed.","Robots; Computer vision; Real time operation; Parallel processing (computers); Map matching guidance; Hermes manned spaceplane; Attitude control; Pattern recognition; Algorithms; Data reduction; Coding; Scene analysis; Convexity","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:405bc6f2-b552-41d3-919a-e1d11144dd1b","http://resolver.tudelft.nl/uuid:405bc6f2-b552-41d3-919a-e1d11144dd1b","CADISS: a multi-processor system for image compression/decompression on board scientific satellites","Roefs, H.F.A.; Monkel, A.","","1987","Project CADISS has produced an elegant breadboard of a system for compression (and -if required- decompression) of signals generated by imaging sensors on-board scientific satellites. The performance, specifications and physical characteristics of CADISS are the result of trade-offs on function (algorithms), technology, through-put speed and cost. CADISS is a low-power, (micro) programmable multi-processor prototype system with interfaces to a Remote Terminal Unit (for downloading of software), to image data generating instruments and to the on-board formatter. This report describes the concept of CADISS, its algorithms, its technology and the test results. It should be noted that the design was frozen mid 1983, using technology which was state-of-the-art at that time.","Onboard data processing; Remote sensing; Data compression; Data reduction; Algorithms; Coding; Image processing; Image reconstruction; Satellite imagery; Multiprocessing (computers); Breadboard models","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:66929806-2b33-42ac-94c3-fc584bf16be6","http://resolver.tudelft.nl/uuid:66929806-2b33-42ac-94c3-fc584bf16be6","An evaluation of data compression algorithms","Hogendoorn, R.A.","","1986","Non-specialist users of data compression often encounter difficulties with the selection of an algorithm, suited for their application. This report tries to facilitate this task by providing the necessary information about the algorithms. Four established, at the National Aerospace Laboratory available algorithms were selected for this study: - The Chaturvedi and Melzer algorithms - Straight-line approximation - Spline approximation with image segmentation. As a related goal, the relative performance of these algorithms has been investigated. Tests were done with Meteosat and Landsat Thematic Mapper images.","Data compression; Signal encoding; Imaging techniques; Transformation (Mathematics); Algorithms; Segments; Satellite imagery; Spline functions; Imag. processing; Signal distortion; Error analysis","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:47632d61-9e08-4721-bfbe-ab7b3e02e6ea","http://resolver.tudelft.nl/uuid:47632d61-9e08-4721-bfbe-ab7b3e02e6ea","Preliminary design and analysis of procedures for the numerical generation of 3D block-structured grids","Boerstoel, J.W.","","1986","An analysis of various alternative approaches in grid generation is presented. A grid-generation procedure for complex aircraft configurations could be based on a combination of three subprocesses, . decomposition of the flow domain into about 100 hexahedronal blocks, . trilineair transfinite interpolation to generate initial grid point distributions, and . elliptic mesh-size tuning and smoothing. To get insight into this procedure, mathematical models of these three subprocesses were worked out and analyzed. The results of the analysis are technical concepts required or desirable in the grid-generation procedure. These concepts are presented.","Computational fluid dynamics; Aircraft configurations; Finite volume method; Euler equations of motions; Parallel processing (computers); Elliptic differential equations; Computational grids; Three dimensional flow; Hexahedrons; Dlrichlet problem; Cells","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:6237d520-f1eb-4cb1-a3a3-4a3b6a3f2def","http://resolver.tudelft.nl/uuid:6237d520-f1eb-4cb1-a3a3-4a3b6a3f2def","Procesbeschrijvende modellering van de waterkwaliteit van de zuidnederlandse noordzeekust","Markus, A.A.","Deltares","1986","","hydrodynamica; hydrodynamics; Noordzee; seawater quality; sedimenttransportprocessen; sediment transport processes; stoftransportmodellen; transport models; waterkwaliteitsmodellen; water quality models; zeewaterkwaliteit","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:39ed6506-9306-468a-95e2-8ba31fa7aaf2","http://resolver.tudelft.nl/uuid:39ed6506-9306-468a-95e2-8ba31fa7aaf2","Data compression in computational fluid dynamics","Hoogendoorn, R.A.","","1986","Computations in numerical fluid dynamics create large sets of data. Transmission of these datasets requires a high speed communication link or, alternatively, the use of data compression techniques. This report describes the results of a study to the applicability of data compression techniques. An algorithm is designed that yields favourable results with sample data.","Data reduction; Data compression; Data transmission; Transmission efficiency; Coding; Differential pulse code modulation; Error analysis; Markov processes; Algorithms; Computational fluid; Computational grids; Distortion; Costs; Runtime (computers)","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:8c9b9eef-5563-4d11-8450-cd0765284a37","http://resolver.tudelft.nl/uuid:8c9b9eef-5563-4d11-8450-cd0765284a37","Characterization of Chromatin Distribution in Cell Nuclei","Young, I.T.; Verbeek, P.W.; Mayall, B.H.","","1986","","quantitative microscopy; image processing; texture measures; pattern recognition; image measurement","en","journal article","Wiley","","","","","","","","","","","","",""
"uuid:b96efeed-53a6-45c6-83ff-8373cac78fbd","http://resolver.tudelft.nl/uuid:b96efeed-53a6-45c6-83ff-8373cac78fbd","Image data reduction with splines and segmentation with emphasis on earth observation","de Pagter, P.J.; Renes, J.J.","","1985","This report describes the results of a feasibility study on image data compression using spline approximation and image segmentation, with emphasis on earth observation applications. The main subjects dealt with, are: the data compression method, implementation aspects, the test software and test results. It appears that a reduction factor of 5 to 10 is possible without significant visual loss of information. The spline approximation involves a convolution process. A simple hardware implementation is described. Benchmark tests yielded an accuracy of 5 bits. Possible improvements are discussed.","satellite imagery; image processing; data compression; spline functions; edges; segments; cluster analysis; algorithms; data reduction; computer programs; performance tests","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:78577b4d-47c9-46a0-97a3-893558bc5be6","http://resolver.tudelft.nl/uuid:78577b4d-47c9-46a0-97a3-893558bc5be6","Final report on a research facility for critical point phenomena in microgravity","Huijser, R.H.","","1985","A conceptual design for a research facility for critical point phenomena in Spacelab's microgravity environment is presented. In the critical point facility (CPF) four experiments can be performed in parallel. The CPF consists of five parts: four experiment units and a service module. The experiment units offer the user-provided (instrumented) experiment cells a periphery consisting of a high precision thermostat and experiment-dedicated electronics/diagnostics. The service module provides an ambient, optical subsystems, and electrical subsystem with microprocessor(s), power supplies and the mechanical structure shared by the four units. Moreover, it establishes the various interfaces towards the Spacelab system.","space processing; spacelab payloads; space station; equipment specifications; physics and chemistry experiment in space; fluid dynamics; critical point; research facilities; optical equipment; microgravity applications; user requirements; interfaces","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:3d6cd9b0-08e7-421c-993b-32d13406b0aa","http://resolver.tudelft.nl/uuid:3d6cd9b0-08e7-421c-993b-32d13406b0aa","Hydro-morphological study Douro estuary part 9: Hydro-morphological mathematical model","Olthof, J.; Verhagen, H.J.","TU Delft","1984","As a continuation of the initial study on the morphology of the Douro mouth, performed by Hydronamic in 1982, the Administracao dos Portos do Douro e Leixoes commisioned Hydronamic bv to construct a mathematical hydro-morphological model of the Douro mouth. This second part of the morphological study of the Douro mouth is divided into three phases, viz.: a. Additional measurements during a period of low river discharge. The results are presented in report part 7. b. A statistical analysis of the topography of the Cabedelo from 1872 until 1983. The results are presented in report part 8. c. The construction of a mathematical model of the Douro mouth. This report presents a description of the component parts, the operation and the calibration of the mathematical model. The used model is a probabilistic model using stochastic techniques to simulate the variations of the Cabedelo.","morphological model; sand spit; river bar; stochastic process; tidal inlet; simulation model","en","report","Hydronamic","","","","","","","","","","","","",""
"uuid:2488f2e8-e421-4246-bc4e-ccd1e88399ee","http://resolver.tudelft.nl/uuid:2488f2e8-e421-4246-bc4e-ccd1e88399ee","Image quality criteria with emphasis on criteria for remote sensing imagery","van der Lubbe, J.C.A.","","1984","Image quality plays an important role on the various levels of image processing. However, until now it lacks a survey of methods for the evaluation of image quality. With a grant of the Netherlands Agency for Aerospace Programs (NIVR) a study was performed with respect to the quantification of image quality. Some results of this study are reflected in this report. This report is meant as a guide for the measurement of image quality in its various facets. In addition to already common approaches to image quality assessment also attention is paid to less often considered aspects of image quality like texture and edge quality.","image processing; remote sensing; edges; imaging techniques; multi spectral photography; accuracy; image analysis; quality; criteria; pattern classification; datacompression; distortion; entropy; coding; gray scales; textures","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:497bbaaa-8b4c-4714-a5ed-83d01deec233","http://resolver.tudelft.nl/uuid:497bbaaa-8b4c-4714-a5ed-83d01deec233","IRAS end-to-end datasystem","van Holtz, R.C.","","1983","The development of a complex data system involving many disciplines requires a strict control from the early phases of a project onwards, to ensure that all requirements are met within the technical and financial constraints. To exercise such control becomes more difficult when different institutes/organizations contribute to the system, the more so when they are of different nationalities. The data system for the Infra Red Astronomy Satellite (IRAS) is typically such a large, complex system involving a software effort of American, English and Dutch institutes, totalling over 150 manyears. This paper describes the End-to-End Data System approach used during the design and development phases of the project. It assesses its effectiveness in the light of 6 months operational experience and indicates where a further application of the principle probably would have been beneficial.","data system; configuration management; data collection platforms; infrared astronomy satellite; computer systems programs; data acquisition; data processing; ground stations; uplinking; downlinking","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:7001bac6-4a4f-49e0-b2f8-a10a81487a0d","http://resolver.tudelft.nl/uuid:7001bac6-4a4f-49e0-b2f8-a10a81487a0d","Improvement of the superposition of geographical data in KOSMOSS","van Popta, R.G.","","1983","Since mid 1982, the Royal Netherlands Meteorological Institute KNMI has to its possession a ground station (KOSMOSS), which receives and processes high resolution imagery, acquired from Tiros-N type weather satellites. To determine the weather positions in the images, the ground station is equipped with the capability to superpose, in real time, geographical data (e.g. coast contour lines) to the weather images. The applied superposition shows some imperfections: -the position deviation of the superposed geographical data is visible and disturbing, and -the superposed coast contour lines are too coarse. This report describes the results of an investigation to remove that imperfections through the application of more accurate orbital data, and more detailed coast contour lines. Further, thanges in the KOSMOSS software are proposed, of thiwh it is expected that they yield a substantial superposition improvement without decreasing the image production capacity of the KOSMOSS ground station.","satellite photography; TIROS-N satellite; superposition (mathematics); ground stations; orhital position prediction; position errors; image processing; meteorological charts; computer aided mapping; shorelines; computer programs; coordinate transformations","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:5fb7e72d-9e47-4792-99d6-377ce67df954","http://resolver.tudelft.nl/uuid:5fb7e72d-9e47-4792-99d6-377ce67df954","Comparison of a jump-diffusion tracker with a Kalman tracker: an evaluation with emphasis on air traffic control","Blom, H.A.P.","","1983","A sophisticated starting point for a probabilistic approach to the radar tracking problem is a Markov jump-diffusion model for the aircraft dynamics, its control and the radar measurements. From nonlinear filtering theory, a closed form description of the evolution of the conditional distribution of this Markov process can be obtained. This jump-diffusion filter, however, is infinite dimensional and approximations are necessary for algorithmic implementation. The indirect approach of approximating the jump-diffusion by a diffusion leads to a Kalman-like tracker. The recently developed approach of approximating the jump-diffusion filter directly, leads to a bank of interacting Kalman-like trackers. The report is directed to the evaluation of tracking algorithms that are based on these two approaches. Their results are compared in an air traffic control environment. It is concluded that the jump-diffusion' tracker performs considerably better than the Kalman tracker.","tracking position; tracking filters; jump detection; kalman filters; diffusion theory; monte carlo method; markov processes; air traffic control; radar tracking; clutter; surveillance radar; flight paths; turning flights; aircraft manoeuvres; algorithms; standard deviation","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:7b65c974-2297-4d3c-a6c3-5e27eb00dce3","http://resolver.tudelft.nl/uuid:7b65c974-2297-4d3c-a6c3-5e27eb00dce3","On-line superposition of geographical data to tiros-N type meteorological satellite images","van Popta, R.G.","","1982","A method is described, by which geographical data are superposed in real time to the high resolution HRPT imagery, received from the TIROS-N series satellites. The superposition of additional geographical data is important for an accurate position determination of the image data. Computations necessary for the transformation, conversion and sorting of the additional data are performed on-line (i.e. during a period of five minutes, immediately before the image production). The superposition is based on orbit predictions with a circular model, which has been modified to decrease the geographical data position deviations in the image.","Satellite borne photography; Meteorological charts; Ground stations; Meteorological services; Tiros N satellite; Image processing; Casini projection; Superposition (mathematics); Contours; Photomaps; Real time operation; Pipelining (computers); Shorelines; Computer aided mapping","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:9f2674f3-5286-4204-b6e6-accc26d8f49f","http://resolver.tudelft.nl/uuid:9f2674f3-5286-4204-b6e6-accc26d8f49f","Design and Implementation of the Delft Image Processor DIP-1","Gerritsen, F.A.","","1982","This report describes the design and implementation of the Delft Image Processor DIP-1. In this pipelined image-processing computer, a number of storage and processing modules which operate concurrently and synchronously (hardware floating-point ALUs and multiplier, conversion table and cellular-logic table) may be interconnected in a reconfigurable way by microprogrammable data selectors to form a pipeline. The machine is especially suited (though not exclusively dedicated) to performing arithmetical and logical neighbourhood operations (neighbourhood sizes up to 16x16 inclusive) and cellular-logic operations (neighbourhood size 3x3). An introduction is given to some fundamentals of image processing, discussing the computational complexity of a number of frequently used basic image transformations. A number of high-speed computer architectures and image-processing machines are briefly described (Cellscan, GLOPR, PICAP-1, Cytocomputer, MPP, ICL-DAP, CLIP 4, AP-120B). The influence of the input/output structure of processor arrays on their performance is analyzed. Suggestions are given for the improvement of current processor-array implementations. The DIP-1 hardware and support software (micro-assembler, -linker/loader, -debugger, run-time support system and diagnostics) are described. Examples are given of the operation of DIP-1 by discussing the way in which a number of image transformations were implemented as DIP-1 microprograms. An evaluation is given of the machine's hardware and software. Suggestions are offered for the further development of pipelined image-processing hardware.","image processing; parallel processing (computers); pipelining (computers); design of computers; microprogramming; debugging; array processors; processor arrays; floating-point hardware; image enhancement; spatial filtering; neighbourhood operations; cellular-logic operations; convolution; erosion; dilation; skeletonization","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:28dfac74-d940-46ae-8982-aff81a07640a","http://resolver.tudelft.nl/uuid:28dfac74-d940-46ae-8982-aff81a07640a","Mathematisch-fysische en numeriek-wiskundige problemen in TOW-B: Een inventarisatie en voorstel van onderzoek","Verboom, G.K.; Os, A.G. van","","1982","","sedimenttransportprocessen; sediment transport processes; stromingsmodellen; flow models; numerieke modellen; numerical modelling","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:d66ee86e-b495-4207-82bc-1062cc49775a","http://resolver.tudelft.nl/uuid:d66ee86e-b495-4207-82bc-1062cc49775a","Detection-filter representations for Markov jump diffusions","Blom, H.A.P.","","1982","The problem considered is filtering for Gaussian observations of linear differential systems that are driven by both Wiener processes and marked Poisson point processes. Well-known representations of the MMSE-filter for such a Markov jump-diffusion are a differential for the evolution of its conditional density or differentials for all its conditional central moments. For Markov jump-diffusions that embed finite state Markov processes some transformed filter-representations have been derived. They throw a new light on the problem of filtering for Markov jump-diffusions. A resulting finite dimensional approximate MMSE-filter shows promising advantages above previous approximations. Some illustrative examples are given.","Information theory; Tracking filters; Nonlinear filters; Kalman filters; Wiener filtering; Markov chains; Markov processes; Counting; Differential equations; Diffusion theory; Jump detection; Stochastic processes; Poisson density functions; Martingales; Ito differential equation; Fokker-Planck equation","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:716d9c1d-4e29-409d-9032-8fcaf29722d4","http://resolver.tudelft.nl/uuid:716d9c1d-4e29-409d-9032-8fcaf29722d4","A method for measuring take-off and landing performance of aircraft, using an inertial sensing system","Pool, A.; Simons, J.L.; Wensink, G.J.H.; Willekens, A.J.L.","","1980","The STALINS method for measuring take-off and landing trajectories is briefly described and results of flight tests made in 1978-80 are discussed. The method now meets the requirements to which it was designed, and a few improvements in the hardware and software are being finalized. The method is expected to be ready for operational use in the course of 1981.","Flight tests; Flight test instrumentation; Aircraft performance; Aircraft landing; Take-off runs; Data processing; Requirements; Inertial platforms","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:d80f8a96-6fa9-4491-9074-9a192085bfaf","http://resolver.tudelft.nl/uuid:d80f8a96-6fa9-4491-9074-9a192085bfaf","On martingales and recursive optimal state estimation","Moek, G.","","1980","This report constitutes the deposit of a study of literature on martingales and estimation theory. Some experimental work concerning the efficiency of an estimation algorithm based on martingales is also described. The investigation has been guided by the intent to investigate martingales in relation to the Kalman and Kalman-Bucy filters and to consider the implications for the practical filtering work at NLR. The optimal estimation problem and the Kalman and Kalman-Bucy filters are recalled. Martingales are defined and some examples are given. A number of theoretical results concerning martingales and linear as well as nonlinear estimation theory is summarized in an innovation and representation theorem. Based upon this a simple derivation of the Kalman and Kalman-Bucy filters is sketched. Reference is made to more general estimation problems (counting process observations, martingale noise) which can be solved along the same lines. Finally, a wide-sense martingale approach to linear estimation is discussed. For a class of signals, comprising those studied in the Kalman and Kalman-Bucy problem, recursive estimation equations for the signal based on observations in either white or coloured noise are described. A more general model for coloured noise is given. By means of numerical simulation of some model problems the Kalman filter and the wide-sense martingale approach are compared.","Stochastic processes; Estimation; Linear systems; Nonlinear systems; Martingales; Least squares method; Recursive functions; Kalman filter; State vector; White noise; Theorem proving; Coloured noise; Representation theorem; Innovation theorem","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:7c91d66d-bf95-40c4-b82b-5a167c515e00","http://resolver.tudelft.nl/uuid:7c91d66d-bf95-40c4-b82b-5a167c515e00","Introduction to chemical process technology","Van den Berg, P.J.; De Jong, W.A.","","1979","","chemical processes","en","book","Delft University Press; D. Reidel Publishing Company","","","","","","","","Applied Sciences","","","","",""
"uuid:4c92dec1-e5fb-4463-8058-1d6d1eedd510","http://resolver.tudelft.nl/uuid:4c92dec1-e5fb-4463-8058-1d6d1eedd510","Study of an attitude acquisition measurement technique using the ESA starmapper: Part I: Executive summary","van Woerkom, P.Th.L.M.; Sonnenschein, F.J.","","1979","The study deals with the rotational motion of a spin-stabilized spacecraft equipped with a V-slit sun sensor and the X-slit ESA starmapper. Assuming complete absence of an a priori attitude estimate, it is required to produce a rough estimate of the spacecraft attitude motion. It was assumed that sensor misalignments, external torques and nutation are negligible. The estimation algorithm is described, together with details of its software implementation. Simulations using pseudomeasurements (generated by a truth model) were carried out for a number of parameter combinations, including nutation, sensor misalignments, different spin rates, telemetry resolution and spacecraft asymmetry. In all simulations the spacecraft attitude could be estimated with errors of no more than about one arcminute. Apparently, the algorithm is robust with respect to parameter variations.","Satellite attitude control; Spacecraft motion; Spin stabilization; Satellite rotation; Spin dynamics; Mapping; Star trackers; Solar sensors; Star distribution; Optical scanners; Astronomical catalogs; Data acquisition; Estimating; Computer programs; Algorithms; Batch processing; Telemetry; Signal processing; Pattern recognition","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:6a8c3455-f24a-4f84-8449-e6d638a794df","http://resolver.tudelft.nl/uuid:6a8c3455-f24a-4f84-8449-e6d638a794df","Study of an attitude acquisition measurement technique using the ESA starmapper: Part two: final report","van Woerkom, P.Th.L.M.; Sonnenschein, F.J.","","1979","The study deals with the rotational motion of a spin-stabilized spacecraft equipped with a V-slit sun sensor and the X-slit ESA starmapper. Assuming complete absence of an a priori attitude estimate, it is required to produce a rough estimate of the spacecraft attitude motion. It was assumed that sensor misalignments, external torques and nutation are negligible. The estimation algorithm is described, together with details of its software implementation. Simulations using pseudomeasurements (generated by a truth model) were carried out for a number of parameter combinations, including nutation, sensor misalignments, different spin rates, telemetry resolution and spacecraft asymmetry. In all simulations the spacecraft attitude could be estimated with errors of no more than about one arcminute. Apparently, the algorithm is robust with respect to parameter variations.","Satellite attitude control; Spacecraft motion; Spin stabilization; Satellite rotation; Spin dynamics; Mapping; Star trackers; Solar sensors; Star distribution; Optical scanners; Astronomical catalogs; Data acquisition; Estimating; Computer programs; Algorithms; Batch processing; Telemetry; Signal processing; Pattern recognition","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:c9ac9ce3-e790-46b0-b893-09b2cb3fedce","http://resolver.tudelft.nl/uuid:c9ac9ce3-e790-46b0-b893-09b2cb3fedce","Accurate spacecraft attitude estimation with the ESA starmapper","van Woerkom, P.T.L.M.; Traas, C.R.","","1978","The ESA starmapper is an optical-electronic instrument with an array of photo-sensitive slits, to be strapped down to a spinning spacecraft. Recorded times of passage of known stars over these slits, together with mathematical models for spacecraft dynamics and the starmapper instrument, provide the basis for recursive estimation of the spacecraft attitude. The estimator is robust with respect to numerical errors as well as modelling errors. It is found advantageous to treat certain parameters as consider parameters. In the case of small nutation angles, attitude estimates can be obtained with an accuracy of about one arcminute. Degradation in accuracy occurs for larger nutation angles.","space vehicles; attitude control; sensors; modelling; stochastic systems; state estimation; Kalman filters; filtering; data processing; error compensation","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:04d72385-fe53-4a2e-8c4c-03a44ab26dad","http://resolver.tudelft.nl/uuid:04d72385-fe53-4a2e-8c4c-03a44ab26dad","Aircraft design loads due to non-stationary atmospheric turbulence patches","Noback, R.","","1978","In this report it is assumed that atmospheric turbulence appears in patches and that within the patches the turbulence can be described as a modulated Gaussian process. The patch lengths have a certain probability density function. Load exceedance curves and design loads for various aircraft models for this turbulence model are compared with those obtained with the PSD turbulence model.","aircraft design; gust loads; atmospheric turbulence; turbulence; expectancy hypothesis; transfer functions; transient response; stochastic processes; statistical analysis; probability density functions; power spectra","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:7b5608f8-40dd-4863-a2fe-205eab44aade","http://resolver.tudelft.nl/uuid:7b5608f8-40dd-4863-a2fe-205eab44aade","Het meten van lichtreflecties van gewassen","Verhoef, W.","","1977","Measurement of an object's reflectance spectrum can provide information on its nature and qualities. However, one should thoroughly take into account the fact that in general the reflectance spectrum does not only depend on the object itself, but also on the parameters defining the measurement conditions, like the spectral and spatial distribution of the incident light and the direction in which the reflectance is being measured. Therefore, when comparing reflectance spectra of different objects, it is necessary to keep the conditions of measurement as constant as possible. With the NIWARS-fieldspectrometer this has been realised by performing a comparative and simultaneous measurement on the object and a diffuse reflecting reference target which is exposed to the same illumination. The influence of variations in light intensity is suppressed in this way. By keeping the spatial distribution of the irradiation and the direction of observation constant, one can obtain a set of reflectance spectra among which variations can only be attributed to differences of the properties of the objects. However, it is possible that one is particularly interested in the influence of parameters like the solar position and the view angle on the reflectance spectrum of a certain object. With a spectrometer set-up, designed for this purpose, measurements have been executed. They provided insight into the magnitude of the systematic variations associated with multispectral scanning of crops.","spectral reflectance; spectral signature; crop identification; spectrometers; multispectral scanning; near infrared; light (visible radiation); ground truth; signal processing","nl","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:5decf231-6a10-464e-8dc2-ff428c592028","http://resolver.tudelft.nl/uuid:5decf231-6a10-464e-8dc2-ff428c592028","A non-stationary model for atmospheric turbulence patches, for the prediction of aircraft design loads","Noback, R.","","1976","In this report a model for atmospheric turbulence is proposed. It is assumed that atmospheric turbulence appears in patches and that within the patches the turbulence can be described as a modulated Gaussian process. Statistical properties of this model and of atmospheric turbulence are compared. Using data from various sources a probability distribution function for patch lengths is derived and the relation between patch intensity and patch length is investigated.","aircraft design; atmospheric turbulence; expectancy theory; gust loads; gusts; mathematical models; power spectra; probability density functions; random loads; statistical analysis; stochastic processes","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:2a85780c-6eb7-4d83-96b2-7fbd2d333d5c","http://resolver.tudelft.nl/uuid:2a85780c-6eb7-4d83-96b2-7fbd2d333d5c","Digital filtering methods, with applications to satellite attitude determination in the presence of modelling errors. Part I: theory","Traas, C.R.","","1976","The report reviews a number of digital filtering methods and gives examples of their application. The methods include batch processing methods and recursive methods, not only in the classical formulations but also in square-root formulations. The latter formulations serve to prevent certain numerical problems. Much attention is paid to the treatment of modelling errors. An effective algorithm is proposed to detect filter divergence. The illustrations concern a number of realistic satellite attitude determination problems.","statistical analysis; satellite attitude; digital filters; estimating; algorithms; stochastic processes; adaptation; square root filters; Kalman filters; errors; mathematical models; covariance","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:565f5002-5841-44c7-8a36-3c6b1ab43024","http://resolver.tudelft.nl/uuid:565f5002-5841-44c7-8a36-3c6b1ab43024","Equations for the response of an airplane to non-stationary atmospheric turbulence patches","Noback, R.","","1976","In this report a method is described to calculate the load exceedance curve for a linear system having a finite, modulated Gaussian process as input. The derivation is based on the use of ensemble averages, defined as the expected values at a certain point of time. The equations can be used for any airplane-transfer function for which the Power Spectral Density method is applicable.","gust loads; random loads; atmospheric turbulence; turbulence; statistical analysis; transfer functions; transient response; stochastic processes; aircraft design; power spectra; expectancy hypothesis","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:a56afe13-b7cb-4b66-b50d-5d1f4a5713c9","http://resolver.tudelft.nl/uuid:a56afe13-b7cb-4b66-b50d-5d1f4a5713c9","Comparison between the statistical discrete gust method and the power spectral density method","Noback, R.","","1975","The Power Spectral Density method and the Statistical Discrete Gust method to calculate aircraft design loads due to atmospheric turbulence are compared qualitatively and quantitatively on the basis of the load exceedance curves for certain aircraft models. It is shown that both methods are related to each other, giving the same results for simple first and second order airplane models. It is concluded that the Statistical Discrete Gust method has no advantages and a number of disadvantages compared to the Power Spectral Density method and is not suitable as an airworthiness requirement for the calculation of design loads.","gust loads; random loads; atmospheric turbulence; statistical analysis; power spectra; aircraft design; gusts; stochastic processes; discrete functions; dynamic response; aircraft reliability","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:024bf4f9-a3f3-4ed3-b15f-f285e29be5ab","http://resolver.tudelft.nl/uuid:024bf4f9-a3f3-4ed3-b15f-f285e29be5ab","Reproductie zouttoestand getijrivieren (IV): Voorbereidend onderzoek tweelingproeven","Rees, A.J. van","Deltares","1974","","data processing; density induced flow; dichtheidsstroming; gegevensverwerking; getijberekening; measuring methods; meetmethoden; Rotterdamse Waterweg; tidal computation; turbulence measurement; turbulentiemeting","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:e2130092-2eea-412f-ac5e-295ed40bbcf2","http://resolver.tudelft.nl/uuid:e2130092-2eea-412f-ac5e-295ed40bbcf2","An analytical model of the write and read process in digital magnetic recording with a linear recording medium","van-Herk, A.; Wesseling, P.","","1974","","magnetic recording; magnetic storage systems; digital magnetic recording; linear recording medium; Karlqvist expression; positive head field; magnetisation distribution; magnetic field; readback signal; read-write process; analytical model","en","conference paper","IEEE, New York, NY, USA","","","","","","","","","","","","",""
"uuid:0039ec79-5aa3-4447-a9f8-22821b8d9d72","http://resolver.tudelft.nl/uuid:0039ec79-5aa3-4447-a9f8-22821b8d9d72","A study of surface treatment of aluminium alloy samples by measuring light stimulated electron emission and contact potential","Hartman, A.","","1973","Tests on samples anodized in a tartrate solution indicated that pore-free thin oxide films prevent almost all light stimulated electron emission. Probably ""pores"" are requisite for emission. The contact potential and emission as a function of the pickling time in chromic-sulfuric acid solution could be explained qualitatively. The micro etch pit structure with excellent bonding to adhesives was not a dominant factor with respect to the emission of the sample. The post-treatment, rinsing and drying, had a significant effect on the photo-emission of the surface.","light stimulated electron emission; pickling process; contact potential","en","report","Nationaal Lucht- en Ruimtevaartlaboratorium","","","","","","Campus only","","","","","","",""
"uuid:839877de-3662-4d5a-8ced-a2190f7a168a","http://resolver.tudelft.nl/uuid:839877de-3662-4d5a-8ced-a2190f7a168a","Reproductie zouttoestand getijrivieren (V): Numerieke aspecten gegevensverwerking getijgootonderzoek","Maiwald, K.","Deltares","1972","","computerprogramma's; data processing; density induced flow; dichtheidsstroming; gegevensverwerking; numerical analysis; numerieke analyse; Rotterdamse Waterweg; software","nl","report","Deltares (WL)","","","","","","","","","","","","",""
"uuid:4e193fb6-6d6f-4ed4-abd2-641b79840453","http://resolver.tudelft.nl/uuid:4e193fb6-6d6f-4ed4-abd2-641b79840453","Refurbishment of Residential Buildings: A Design Approach to Energy-Efficiency Upgrades","Konstantinou, T.; Knaack, U.","","","Refurbishing the existing building stock is an acknowledged issue in the building industry. Even though awareness has been raised, the design phase of refurbishment projects is often problematic. The decisions taken in the early stages of the design determine the final result; however, the assessment of the environmental performance only happens at the end of the design process. This paper discusses an approach to the design of refurbishment projects, as a way to energy-efficiently upgrade the residential stock. Based on a case study of a multi-residential building of the post-war period in Germany, we assess the impact of the retrofitted building components on the environmental performance of the building. The different options are systematically organised into categories, creating a “toolbox”. The compilation of different “tools” composed the refurbishment strategy. In this way, the impact of the refurbishment was evaluated in the early design stages. The toolbox supported the decision-making process of the design, resulting in integrated strategies that improve the performance of the building.","refurbishment; residential building; energy upgrade; design process; decision making","en","journal article","Elsevier","","","","","","","","Architecture and The Built Environment","Architectural Engineering + Technology","","","",""
"uuid:ddf637fc-3fcf-47dc-8a54-913b0159d8e2","http://resolver.tudelft.nl/uuid:ddf637fc-3fcf-47dc-8a54-913b0159d8e2","Stochastic parameterization of convective area fractions with a multicloud model inferred from observational data","Dorrestijn, J.; Crommelin, D.T.; Siebesma, A.P.; Jonker, H.J.J.; Jakob, C.","","","Observational data of rainfall from a rain radar in Darwin, Australia, are combined with data defining the large-scale dynamic and thermodynamic state of the atmosphere around Darwin to develop a multicloud model based on a stochastic method using conditional Markov chains. The authors assign the radar data to clear sky, moderate congestus, strong congestus, deep convective, or stratiform clouds and estimate transition probabilities used by Markov chains that switch between the cloud types and yield cloud-type area fractions. Cross-correlation analysis shows that the mean vertical velocity is an important indicator of deep convection. Further, it is shown that, if conditioned on the mean vertical velocity, the Markov chains produce fractions comparable to the observations. The stochastic nature of the approach turns out to be essential for the correct production of area fractions. The stochastic multicloud model can easily be coupled to existing moist convection parameterization schemes used in general circulation models.","deep convection; radars/radar observations; cloud parameterizations; cumulus clouds; stochastic models; subgrid-scale processes","en","journal article","American Meteorological Society","","","","","","","2015-08-01","Civil Engineering and Geosciences","Geoscience and Remote Sensing","","","",""
"uuid:b6b9f68f-ad44-425f-91e3-67f9946092cc","http://resolver.tudelft.nl/uuid:b6b9f68f-ad44-425f-91e3-67f9946092cc","Lecture notes on ""The Role of Rivers to Mankind""","Berdenis van Berlekom, H.A.","NEDECO","1969","It is the task and target of River Engineering as a profession to provide all the tools for arriving at an optimum utilization of the potential resources, optimum in the sense of promoting the beneficial characteristics of the river and eliminating or at least checking the adverse qualities. To strive for this aim, there must be a scientific understanding of the complex pattern of natural forces that exert their influence; a general knowledge on what we call the phenomenon ""River"". These lecture notes try to provide this understanding. In specific, it will provide insight into the following subjects related to the ""River"": - The river's functions - The longitudinal profile - (In)dependent variables in the river valley - Water movement - Sediment movement - Bed formation in a straight river - Cross-sections of a river in a bend - Features of cross-sections in a long narrowed section - Rivers under natural conditions Furthermore, the lecture notes include some improvement schemes in which improvements are described step-by-step. This section especially focuses on regulation and normalization.","morphology; rivers; river engineering; river regulation; river management; river profile; hydraulics; river processes","en","report","Polish Academy of Sciences Institute of Hydro-Engineering, Gdańsk","","","","","","","","","","","","Selected Problems from the Theory of Simulation of Hydrodynamic Phenomena",""