"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:559a64dc-62ae-47ce-b9dc-4e2cb56b6e27","http://resolver.tudelft.nl/uuid:559a64dc-62ae-47ce-b9dc-4e2cb56b6e27","Effects of Artifact Age on Maven Dependency Resolution","Kuļikovskis, Gints (TU Delft Electrical Engineering, Mathematics and Computer Science)","Proksch, S. (mentor); Poulsen, C.B. (graduation committee); Delft University of Technology (degree granting institution)","2024","This study conducts an investigation of the challenges faced by aging projects in Maven Central, focusing on the issue of missing dependencies. Using the Maven Explorer indexer, we systematically examine the correlation between the age of a project and the frequency of dependency resolution failures. Our analysis reveals a notable trend: older packages in Maven Central are more likely to encounter dependency resolution issues compared to newer ones. A widespread cause that was identified is the reliance on repositories without Transport Layer Security (TLS). Through this research, we highlight the prevalent issues within the Maven Central ecosystem and also offer insights into common causes of dependency resolution failures. We advocate for uploading new versions of libraries to multiple repositories to mitigate these issues. This study reviews the current state of Maven Central and extends some of the findings to other package management systems, contributing to a broader discourse on software longevity and dependency management.","Maven Central; Dependency resolution; Software longevity","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:0734b9c6-888a-4e0d-a8b8-3f9c3c6dfc1e","http://resolver.tudelft.nl/uuid:0734b9c6-888a-4e0d-a8b8-3f9c3c6dfc1e","Finding your digital sibling: Grouping GitHub projects that share certain attributes based on interactions and activities","de Bruin, Rowan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Proksch, S. (mentor); Huang, S. (mentor); Olkhovskaya, Julia (graduation committee); Delft University of Technology (degree granting institution)","2024","This study explores the feasibility of categorizing GitHub projects based on their interactions and activities, aiming to assist both researchers and practitioners in navigating the vast landscape of open-source software. Through experiments and analysis, key attributes contributing to project categorization are identified, paving the way for effective grouping of projects in terms of interactions and activities. Findings indicate distinct clusters among GitHub projects, highlighting the influence of interactions and activities on project categorization. The study underscores the importance of refining grouping algorithms and improving project categorization methods for future research. Future work could involve developing user-friendly tools to facilitate project discovery and exploring correlations between interaction related metrics and project development dynamics. Overall, this study contributes to advancing our understanding of project categorization on GitHub, facilitating more efficient knowledge sharing and collaboration within professional fields.","GitHub; Interactions; Activities; Software; Project grouping","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:8ee75bcc-a42f-415c-a5e3-c01654100253","http://resolver.tudelft.nl/uuid:8ee75bcc-a42f-415c-a5e3-c01654100253","Development of Telemetry Ranging for Small Satellites: A Ranging technique for Delfi-PQ","Srikanth, Vikram (TU Delft Aerospace Engineering)","Speretta, S. (mentor); Menicucci, A. (graduation committee); Root, B.C. (graduation committee); Delft University of Technology (degree granting institution)","2023","Small satellite systems such as CubeSats and PocketQubes have strict requirements in terms of size, weight, and power available onboard. In light of these constraints, small satellite systems typically omit the inclusion of a ranging system due to its power and specialized hardware requirements. However, new research in satellite ranging invented the telemetry ranging class of techniques which do not place any such requirements on the satellite. The thesis explores the possibility of implementing a telemetry ranging technique on the Delfi-PQ satellite, which is already in orbit using only software modifications. The working and performance of the technique in determining the position of the satellite were explored using its engineering model and have positive implications for small satellite navigation and science.
The research adopts an exploratory case study approach, with the Mendix SECO, a low-code PaaS for enterprise application development, as the subject. Semi-structured interviews with developers from Independent Software Vendors (ISVs), service providers, and customers inform the study. Self-Determination Theory (SDT) is applied to structure and comprehend developers' motivators, considering intrinsic and extrinsic motivations.
The findings underscore the significance of intrinsic motivators, such as enjoyment, intellectual stimulation, and skill enhancement, in attracting developers to PaaS SECOs. Competence-related factors, like challenge and creativity, play a pivotal role. Extrinsic motivators, like knowledge exchange and community size, also contribute to initial participation, with SECO support being crucial.
In continued participation, intrinsic motivators remain vital, with skill maturation and the evolving nature of fun being emphasized. Learning new skills aligns with staying abreast of platform innovations, sometimes driven by external pressures. Developers become more aware of the SECO's offerings, with social events acting as supplementary motivators for connection and inspiration.
Extrinsic motivators, particularly in autonomous form, gain importance in sustained participation. Developers align with the platform's strategic direction, emphasizing satisfaction with the SECO and its innovative efforts. PaaS providers' reputation and engagement in social events further enhance developers' motivation.
The study reveals overarching similarities and differences between low-code and other SECO types, questioning assumptions about developer motivations. The dominance of intrinsic motivators aligns with previous research in proprietary SECOs, but nuances, such as the desire for fast and agile development, emerge as unique to PaaS SECOs.
Theoretical contributions include enriching understanding of developer motivation in PaaS SECOs, reevaluating the intrinsic-extrinsic binary scale, and offering insights into nuanced motivators through SDT. Practical implications suggest strategies for PaaS providers, emphasizing the importance of competence-related motivators and the ongoing need for intrinsic satisfaction.
In conclusion, this thesis contributes to both theoretical and practical aspects of developer motivation in PaaS SECOs, paving the way for future research and strategies to attract and retain developers in evolving software ecosystems.","PaaS; Software Ecosystem; Developer Motivation; SECO actor; self-determination theory","en","master thesis","","","","","","","","","","","","Management of Technology (MoT)","",""
"uuid:d873d66b-b316-42ac-b213-91eda321a58f","http://resolver.tudelft.nl/uuid:d873d66b-b316-42ac-b213-91eda321a58f","Development of a Software Architecture for a Reconfigurable Aircraft Design System","Müller, Lukas (TU Delft Aerospace Engineering)","la Rocca, G. (mentor); Hoogreef, M.F.M. (mentor); van Paassen, M.M. (graduation committee); Delft University of Technology (degree granting institution)","2023","Aircraft design systems are frequently used to synthesize aircraft designs. However, it is inherently difficult to reconfigure these software systems to facilitate a broader range of design studies (e.g. optimization or sensitivity studies) and to address follow-up questions about the synthesized aircraft designs. This report presents an investigation into the feasibility of developing a reconfigurable aircraft design system.
Firstly, the issues preventing current aircraft design systems from being used in a reconfigurable way are identified. These issues are closely tied to the intricate and tightly integrated nature of the source code that underpins these systems. This is unsurprising, given that these systems have typically been devised by experts in aircraft design (with varying expertise in software design) who are primarily interested in solving concrete design problems rather than creating sophisticated source code. In particular, the extensive design logic, characterized by high cyclomatic complexity, combined with cluttered and ambiguous design data structures, makes it challenging to comprehend and adapt the functioning of these systems.
An iterative development methodology is employed to devise a software architecture aimed at mitigating the identified issues. A number of distinct architectural elements are introduced that leverage a centralized semantic data management approach and a standardized interface for the formulation of modular analysis & sizing methods. Furthermore, a prototype aircraft design system that serves as a reference implementation for this architecture is developed. The prototype incorporates a graph database, a self-explanatory ontology (defining the semantics of the data stored in the database), and several abstract base classes for encapsulating the computation logic contained within typical analysis & sizing methods used during aircraft design studies. Concrete instances of these base classes are intended to interact with the database through a well-defined endpoint interface, which includes extensive logging capabilities.
The prototype aircraft design system exhibits promising characteristics: The semantic data management approach facilitates the creation of a genuinely unambiguous and flexible data model. Simultaneously, it helps uncover inconsistencies and limitations in the employed analysis & sizing methods. Treating analysis & sizing methods as modular and nested instances of standardized classes appears to be the key to achieving reconfigurability. In addition to the unit-testability of these classes, the self-visualization features incorporated within can significantly enhance the comprehensibility and transparency of the analysis & sizing methods.
The architecture development, repository configuration, ontology formulation, and interface generation took a significant amount of time. Furthermore, some critical challenges surfaced, necessitating further investigation. Specifically, some of the employed analysis & sizing methods feature limitations and implicit assumptions that are required for a synthesis system but may need to be revised before being used in a reconfigurable design system based on the proposed architecture. In the end, there was insufficient time left for implementing a comprehensive set of analysis & sizing methods essential for materializing a thorough aircraft design loop. Therefore, achieving a fully functional aircraft design system prototype proved unattainable. Consequently, it was not possible to demonstrate that adopting the proposed architecture yields an aircraft design system that can be used in a reconfigurable way, despite indications that this could very well be the case...
ThymesisFlow is designed for the situation where a borrower is able to access a lender's memory, while the lender does not access that borrowed memory. Coherency problems arise when both the lender and a borrower write to the lender's memory.
This thesis proposes the use of the Apache Arrow in-memory data format to access memory not only in a near-coherent fashion, but in a fully coherent fashion. This will allow compute clusters to use memory resources more efficiently, allow applications to dynamically hotplug memory, and allow data sharing without copying over an Ethernet connection.
The protocols devised in this thesis are able to create disaggregated Arrow objects that are readable by all nodes in a cluster in a coherent fashion. The creation of these disaggregated objects is the only performance penalty incurred in making them coherent; after initialization, all nodes use their local CPU caches to cache remote objects.
A working proof-of-concept has been created that is able to share Apache Arrow objects stored in the memory of a single node. It is also possible to create Arrow objects that span the memory of multiple nodes, allowing for objects bigger than the memory of a single node. The proof-of-concept could be run thanks to the setup provided by the Hasso Plattner Institute.","Zero-copy memory pooling; Compute clusters; Disaggregated memory; Memory disaggregation; OpenCAPI; ThymesisFlow; IBM; Arrow; Apache Arrow; Cache coherency; CPU cache; Memory sharing; SMP; Memory lending; Memory pooling; Data format; Systematic solutions; Software coherency; CXL; HPI; Big Data; memory access optimization; memory space","en","master thesis","","","","","","","","2023-09-20","","","","Electrical Engineering | Embedded Systems","",""
"uuid:a2b132e9-8d38-4553-8587-0c9e3341b202","http://resolver.tudelft.nl/uuid:a2b132e9-8d38-4553-8587-0c9e3341b202","BRiM: A Modular Bicycle-Rider Modeling Framework","Stienstra, Timo (TU Delft Mechanical, Maritime and Materials Engineering)","Brockie, S.G. (mentor); Moore, J.K. (mentor); Happee, R. (graduation committee); Delft University of Technology (degree granting institution)","2023","Bicycles have been studied extensively over the past 200 years, with mathematical models providing valuable insights into various aspects of bicycle dynamics and rider control. However, the lack of a common framework for creating and sharing bicycle-rider models hinders the development of advanced models, research reproducibility, and dissemination. This thesis addresses this gap by introducing BRiM: an open-source modular and extensible framework for creating Bicycle-Rider Models.
The modular setup of BRiM relies on a systematic approach to define a model and form the analytical equations of motion. For the analytical computations involved, BRiM utilizes SymPy, a Computer Algebra System. The systematic approach consists of four stages. The first stage defines the objects in the system, such as symbols and bodies. Secondly, the kinematic relationships between the objects, such as angular velocities between reference frames, are established. The third and fourth stages, which are order-independent, specify the loads and constraints acting upon the system. This systematic approach enables the decoupling BRiM requires to achieve modularity, because computations within a stage are mostly order-independent.
The core of BRiM employs the systematic approach within a unified framework for modeling mechanical systems in general. It describes a model using a tree representation, in which a model is defined as an aggregation of smaller submodels. The relationships between submodels are established by parent models, using interchangeable connections to accommodate complex relations, such as tyre models between the ground and a wheel. This application of submodels enables swapping and adding submodels, making the overarching model both modular and extensible. Actuation within BRiM can either be specified by attaching prespecified groups of loads to models and connections, or by utilizing the interface provided by the mechanics module in SymPy, which offers the flexibility to even manipulate equations in detail.
BRiM applies this generalized framework to create modular bicycle-rider models. Both a stationary bicycle and a modular bicycle based on Moore's convention of the Carvallo-Whipple bicycle have been constructed. These bicycle models are extensible to bicycle-rider models by including an upper and/or lower body. Within the rider models each joint can be actuated by a linear torsional spring-damper. BRiM integrates parametrization of models, which provides mappings between symbolic quantities used in equations and experimentally determined values, using the existing open-source BicycleParameters library. Additionally, SymMePlot, a visualization package for symbolically defined mechanical systems, has been developed and integrated within BRiM to visualize the created bicycle-rider models.
The effectiveness of BRiM is demonstrated through optimization and simulation tasks. Firstly, a real-time forward simulation of a torque-driven upper body bicycle-rider is performed. Secondly, an optimization problem is solved, involving the tracking of a rolling disc along a sinusoidal trajectory while minimizing the control torques. These demonstrations highlight the seamless integration of BRiM with other scientific tools and BRiM's potential for practical applications.
In conclusion, BRiM fills the gap in bicycle dynamics research by providing a modular and extensible framework for creating and sharing bicycle-rider models. Its systematic approach, unified framework, and integration capabilities enable efficient model development, research reproducibility, and further advancement in bicycle research.
The next step was brought by the emergence of the programmable switch architecture and P4, a language specifically designed for defining the behaviour of programmable network devices. P4 is a remarkably powerful language that allows the software developer to define almost any packet-processing functionality, all while abstracting away from the specifics of the target’s hardware architecture.
Despite its many benefits, P4 brings with it an additional layer of complexity for network administrators, who may find themselves overwhelmed by having to learn a new programming language.
This report tackles this issue by presenting a prototype that is capable of synthesizing small P4 programs from pairs of input & output packets. Under the hood, the proposed solution uses a bottom-up enumerative synthesizer called Probe. This synthesizer was re-implemented, improved, and tailored to leverage the particularities of the problem domain.
from the evolutionary computation community and applying them to test case generation. The crossover between the evolutionary computation domain and test case generation produced DynaMOSA, a state-of-the-art evolutionary algorithm for generating test cases. In an attempt to produce another well-performing algorithm, this paper performs another crossover between the two communities and creates DynaMOSARVEA, a product of a Reference Vector Guided Evolutionary Algorithm (RVEA) and DynaMOSA.
The conducted empirical study showed that although DynaMOSARVEA did not outperform DynaMOSA, it did outperform RVEA, demonstrating the value brought by domain-specific knowledge.","Fuzzing; Evolutionary Intelligence; Javascript; Software Testing; Bug Hunting; RVEA; DynaMOSA; MOSA","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:06133172-8fa5-4581-84b2-6f4f0430f77d","http://resolver.tudelft.nl/uuid:06133172-8fa5-4581-84b2-6f4f0430f77d","Using Newsletters to Analyze Curated Software Testing Content","de Munck, Philip (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology)","Zaidman, A.E. (mentor); Ardıç, B.A. (mentor); Langendoen, K.G. (graduation committee); Delft University of Technology (degree granting institution)","2023","As software and systems continue to get more complex, software testing is an important field to ensure that software functions properly. Every day information about software testing is being discussed on the internet via blog posts, discussion boards, and more. This information is scattered among many different websites, making it hard to access. To analyze software testing content published on the internet, newsletters curated by members of the field and reflective of industry trends were used. This analysis provides a broad overview of what software testing-related content is being discussed on the internet. Common problems discussed in newsletters include properly maintaining tests, working with and fixing flaky tests, and properly analyzing test results. Javascript and Typescript are the most popular programming languages discussed, while the web is also the most popular platform. When looking at test types, automated tests are frequently discussed, followed by end-to-end tests and unit tests. Common techniques and strategies discussed include API testing, the use of continuous integration, and the use of continuous deployment. Selenium, Cypress, and the Gherkin syntax are the most frequently discussed tools and technologies. 
Finally, opinionated articles tend to be most common, followed by articles that introduce a technology and articles that explain a concept.","Software Testing; Newsletter; Qualitative analysis; Atlas.ti","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:7abfe1f3-52bc-4e3f-9f3d-62897ef07425","http://resolver.tudelft.nl/uuid:7abfe1f3-52bc-4e3f-9f3d-62897ef07425","Performance of the Pareto Envelope-Based Search Algorithm - II in Automated Test Case Generation","Abhishek, Apoorva (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Stallenberg, D.M. (mentor); Verwer, S.E. (graduation committee); Delft University of Technology (degree granting institution)","2023","Software testing is an important yet time consuming task in the software development life cycle. Artificial Intelligence (AI) algorithms have been used to automate this task and have proven to be proficient at it. This research focuses on the automated testing of JavaScript programs, and builds upon the existing SynTest framework that is the current state of the art, with the Dynamic Many Objective Sorting Algorithm (DynaMOSA) being the best performing AI algorithm for test case generation. DynaMOSA uses the Non-Dominated Sorting Algorithm - II (NSGA-II) as its base algorithm, and adds modifications to it. This paper investigates whether the use of the Pareto Envelope Based Search Algorithm - II (PESA-II) as the base algorithm results in improved performance. The contributions of this research includes a modified PESA-II integrated into the SynTest framework, using inspiration from DynaMOSA. Moreover, we answer the question ""How does the modified PESA- II perform compared to DynaMOSA in generating test cases for JavaScript programs?"" The performance of the algorithms is measured based on the (branch and method) coverage of the test cases generated for a suite of JavaScript classes. The results show that the modified version of PESA-II outperforms the base version. 
However, neither manages to outperform DynaMOSA.","Software testing; Artificial Intelligence; Algorithms","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:aefff0b9-8723-4eff-a1b4-886c63a07ee9","http://resolver.tudelft.nl/uuid:aefff0b9-8723-4eff-a1b4-886c63a07ee9","Dissecting the secrets of software testing education in universities","Gökmen, Onur (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, A.E. (mentor); Ardıç, B.A. (mentor); Langendoen, K.G. (graduation committee); Delft University of Technology (degree granting institution)","2023","Software testing plays a crucial role in delivering reliable software. Currently, research is ongoing on how software developers and testers acquire this knowledge of software testing to deliver reliable software and what kind of knowledge is being transferred to developers and testers. In an effort to gain more insight into this area, we will focus on answering which software testing topics are being discussed in dedicated software testing courses and software engineering courses in top-ranked universities. Our findings show us that White-box testing, Black-box testing and the discussion of test levels are the most commonly discussed topics in universities.","Software testing; software testing curriculum; education; software engineering curriculum","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:2bbd8857-3dc0-4e52-8f6a-6518c557d287","http://resolver.tudelft.nl/uuid:2bbd8857-3dc0-4e52-8f6a-6518c557d287","Exploring Descriptive Metrics of Build Performance: A Study of GitHub Actions in Continuous Integration Projects","Constantinescu, Radu (TU Delft Electrical Engineering, Mathematics and Computer Science)","Proksch, S. (mentor); Huang, S. (mentor); Aivaloglou, E.A. (graduation committee); Delft University of Technology (degree granting institution)","2023","The Continuous Integration (CI) practice, has been rapidly growing and developing ever since it's introduction. This practice has been constantly providing benefits to developers such as early bug detection and feedback to development teams. In this study, we aim to identify the descriptive metrics that best illustrate the performance of the CI build stage, regarded as heart of the development process.
We conduct a small case study on repositories utilizing GitHub Actions, a relatively unexplored CI tool. Within this context, we classify projects using two performance indicators: build breakages and build durations. We examine two distinct sets of metrics in our analysis: the first comprises build-level metrics, which are closely linked to the build stage; the second includes project-level metrics.
Our findings suggest that patterns traditionally associated with low breakages and durations are applicable to repositories employing GitHub Actions. However, understanding the relationships between project-level metrics demands a more comprehensive approach, necessitating a thorough analysis of the project context for a holistic understanding of build performance.","Build Performance; Continuous Integration; GitHub Actions; Open source software","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:e1508b6c-91b4-4172-9850-0f8fce88bd4a","http://resolver.tudelft.nl/uuid:e1508b6c-91b4-4172-9850-0f8fce88bd4a","Topic Analysis on Popular Software Testing Books: Mining Software Testing Knowledge","Özmetin, Doruk (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, A.E. (mentor); Ardıç, B.A. (mentor); Langendoen, K.G. (graduation committee); Delft University of Technology (degree granting institution)","2023","In this study, we try to understand what kind of topics and frameworks are covered by the popular software testing books, and see whether these topics satisfy the industry needs and address the rising trends. To define ""popular"" software testing books, we formulated three heuristics. The topics of the books are analyzed through LDA topic modelling and manual inspection. LDA results inform us on the dominance of the topics within the whole corpus, while the manual inspection results show how often a topic is addressed. We combine the results of both of the methods to analyse the most noteworthy topics. We found that test automation, test design and planning, coverage analysis were the most frequently and extensively discussed topics in our corpus. We conclude that although the books cover some major topics that are demanded by the industry, there are also areas such as test management and usability testing, which are underrepresented. We also observed that the popular software testing books do not cover the rising software testing trends. While JUnit was the most discussed framework, in general the software testing books do not include practical information for specific frameworks or tools, but rather focus on the tool selection process.","Software testing; Data mining; Book discovery; book analysis; books; LDA; topic modelling; testing","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:14b43b01-514d-46a6-bf3f-0a4251b5817e","http://resolver.tudelft.nl/uuid:14b43b01-514d-46a6-bf3f-0a4251b5817e","Discovering the metrics for assessing a project’s maturity: An analysis of key indicators of maturity","Sartori, Kendra (TU Delft Electrical Engineering, Mathematics and Computer Science)","Proksch, S. (mentor); Huang, S. (mentor); Aivaloglou, E.A. (graduation committee); Delft University of Technology (degree granting institution)","2023","Continuous integration (CI) is a software engineering practice that promotes frequent code integration into a shared repository, improving the productivity within development teams as well as the quality of the software being developed. While CI adoption has gained traction, studies have examined its effective implementation and associated challenges. The idea that multiple contextual factors influence the adoption of CI prompts an exploration of suitable descriptive metrics for describing the CI practices employed. This paper aims to explore the metrics that best depict the level of maturity of a project, addressing the question: ""What metrics can be used to describe the maturity level of a project?"". With a lack of a comprehensive maturity framework, we leverage GitHub's API in an attempt to analyze various metrics to be used to create a framework for filtering projects.
Our findings indicate that project maturity cannot be captured by a single metric, but rather a combination of metrics reflecting different aspects throughout the project's lifecycle. Activity levels, including commits and pull requests, popularity indicators like stargazers, forks, and contributors, as well as repository size and age, emerge as primary indicators of maturity. By combining these metrics, a unified framework for categorizing mature projects can be established and further developed.","Continuous Integration; Project maturity; Open source software","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:d53f052e-4016-425e-adfe-73963c33b1d9","http://resolver.tudelft.nl/uuid:d53f052e-4016-425e-adfe-73963c33b1d9","Mining massive open online courses (MOOCs) for software testing knowledge","Džiugaitė, Neda (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, A.E. (mentor); Ardıç, B.A. (mentor); Langendoen, K.G. (graduation committee); Delft University of Technology (degree granting institution)","2023","Software testing is a necessary aspect of software development. With high expectations placed on software testers and a shortage of qualified professionals, Massive Open Online Courses (MOOCs) have emerged as a potential solution to improve software testing education. MOOCs provide accessible education and can offer a comprehensive review of software testing principles and procedures, bridging the gap between formal education and industry expectations. A study of software testing MOOCs was conducted to examine key aspects and compare concepts with university curricula and industry expectations. The findings show that a MOOC on average covers more concepts than a single university course. Additionally, MOOCs align well with what the industry expects from software testing practitioners. Therefore, MOOCs can successfully contribute to software testing education and bridge the gap between university curricula and industry expectations.","Massive Open Online Courses; Software testing; Education; Online Learning; MOOCs","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:f1c1087b-3661-46b6-912a-98ebb8ae2550","http://resolver.tudelft.nl/uuid:f1c1087b-3661-46b6-912a-98ebb8ae2550","Investigating the performance of SPEA-II on automatic test case generation","Li, Erwin (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Stallenberg, D.M. (mentor); Verwer, S.E. (graduation committee); Delft University of Technology (degree granting institution)","2023","Software testing is an important but time-consuming task, making automatic test case generation an appealing solution. The current state-of-the-art algorithm for test case generation is DynaMOSA, which is an improvement of NSGA-II that applies domain knowledge to make it more suitable for test case generation. Although these enhancements are applicable to other evolutionary algorithms,
no research has been done on how effectively other algorithms can function as the base. In this paper, we apply the DynaMOSA modifications to SPEA-II to create a new algorithm, DynaSPEA-II. We conduct an empirical experiment in which we evaluate the DynaMOSA enhancements, and directly compare DynaSPEA-II to
DynaMOSA. The algorithms are assessed on a benchmark consisting of 36 diverse JavaScript classes w.r.t. branch coverage. Our results show that adding DynaMOSA enhancements to SPEA-II results in higher coverage in 13.9% of classes, with an average increase of 4.92% for classes where a statistically significant difference was found. DynaSPEA-II performed equally to DynaMOSA, with no statistically significant difference being found between the two.","Search-Based Software Testing; Many Objective Optimisation; automatic testing","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:599990ab-c9af-41a8-a926-3bf9e4bd11f5","http://resolver.tudelft.nl/uuid:599990ab-c9af-41a8-a926-3bf9e4bd11f5","Exploring Code Coverage in Open-Source Development","Sterk, Alexander (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, A.E. (mentor); Wessel, Mairieli (mentor); Hai, R. (graduation committee); Hooten, E (graduation committee); Delft University of Technology (degree granting institution)","2023","Software development has increasingly become an activity that is (partially) done online on open-source platforms such as GitHub, and with it, so have the tools developers typically use. One such category of tools is that of code coverage tools. These tools track and report coverage data generated during CI tests. As the adoption of these tools has grown, so does the amount of available coverage data. In this thesis we explore a large database of coverage data from Codecov, a popular coverage tool. What sets our work apart from existing research is that it spans a large number of projects which vary in size, language, and domain. Furthermore, we conduct a survey, which was disseminated among a wide variety of open-source developers, instead of at a single company or in an enterprise setting. Our research consists of three parts. Firstly, we assess whether there is a relationship between the time to merge a PR and its coverage levels. We find that such a relationship does exist in certain projects. Secondly, we look at the impact of PR comments mentioning coverage on the odds of said coverage improving. Using the odds ratio test, we conclude that there are greater odds of coverage improving when it is mentioned than when it is not. Thirdly, we conduct a survey to ask developers their reasons for ignoring a failing status check related to code coverage. Some reasons they give are the complexity of testing, the triviality of the proposed changes, or the pull request being too important to wait for proper testing. 
Furthermore, respondents who identify as code contributors consider themselves twice as likely to find fixing coverage a waste of their time as those who identify as code maintainers, while code maintainers are more concerned with not scaring away new contributors with strict coverage guidelines.","Code Coverage; Open-source software; Data Science; GitHub","en","master thesis","","","","","","https://doi.org/10.5281/zenodo.8044949 Source code and research data","","","","","","Computer Science","",""
"uuid:145c50ec-71fa-4d0d-9c07-139bb6710618","http://resolver.tudelft.nl/uuid:145c50ec-71fa-4d0d-9c07-139bb6710618","Is PSO a valid option for search-based test case generation in the context of dynamically-typed languages?","Viero, Diego (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Stallenberg, D.M. (mentor); Verwer, S.E. (graduation committee); Delft University of Technology (degree granting institution)","2023","In recent decades, automatic test generation has advanced significantly, providing developers with time-saving benefits and facilitating software debugging. While most research in this field has focused on search-based test generation tools for statically-typed languages, only a few tools have been adapted for dynamically-typed languages. The larger search space, created by the dynamic allocation of types, makes standard search-based algorithms less efficient in this domain and requires a different approach. Existing algorithms like NSGA-II, MOSA and DynaMOSA have been employed to address this problem, but exploring different approaches may yield better results. That is why this paper proposes a different procedure based on an adaptation of the particle swarm optimization algorithm (PSO). The adaptation was evaluated using
the SynTest framework, showing that DynaMOSA achieves better results than the presented approach, both when comparing the PSO adaptation with and without DynaMOSA features, and when comparing the base DynaMOSA algorithm with PSO adapted to include DynaMOSA ingredients.","Evolutionary algorithm; Search-Based Software Testing; Dynamically Typed Languages","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:403355b9-2459-4653-8d20-0896c077db9a","http://resolver.tudelft.nl/uuid:403355b9-2459-4653-8d20-0896c077db9a","Mining software testing knowledge from stack overflow","Gupta, Dibyendu (TU Delft Electrical Engineering, Mathematics and Computer Science)","Ardıç, B.A. (mentor); Zaidman, A.E. (mentor); Langendoen, K.G. (graduation committee); Delft University of Technology (degree granting institution)","2023","This paper aims to unveil and gather testing-related information from Stack Overflow, highlighting it as a valuable resource for practitioners seeking answers and guidance.
The study aims to accumulate knowledge from real-life experiences shared on Stack Overflow and bridge the knowledge gap between industry practices and teaching practices.
The paper explores different types of software testing, popular frameworks, temporal trends of testing-related technologies, controversial opinions, and recommended practices/advice/suggestions from Stack Overflow posts. The methodology involves determining search terms through literature, querying the Stack Exchange API, conducting frequency analysis of words from posts, and manually inspecting threads. Our results show that the most popular frameworks discussed are Selenium, Spring, JMeter, and React. Automated testing and JavaScript frameworks have shown an upward trajectory over the years. The recommendations made by practitioners were categorized based on the broad scope of topics covered. We draw comparisons and parallels with related previous research and discuss the technical limitations faced during the study.
Overall, this paper uncovers valuable insights from Stack Overflow and provides practitioners with the current view on industry practices.","Stack Overflow; Software Testing; frequency analysis; framework; Automated Testing","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:f521eba3-7576-4e9c-a779-9932b1840a2f","http://resolver.tudelft.nl/uuid:f521eba3-7576-4e9c-a779-9932b1840a2f","UVC Seed Sterilization BSc Thesis: Software and Control","Ergül, Erman (TU Delft Electrical Engineering, Mathematics and Computer Science); van Weelderen, Erik (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Zeijl, H.W. (mentor); van Turnhout, J. (graduation committee); Wymenga, L.F.A. (mentor); Babaie, M. (graduation committee); Izadkhast, S. (graduation committee); Delft University of Technology (degree granting institution)","2023","In this thesis for the Bachelor’s Degree in Electrical Engineering, a Control Unit (PCB and software) is designed for inside the UVC Seed Disinfection Machine of Team UVO. The machine consists of four modules: the Control Unit, the LED Driver, the motor controller and the power supply. The Control Unit allows a user to input the intensity per wavelength (for possible wavelengths: 255 nm, 275 nm, 285 nm and 395 nm), the exposure time and the motor speed.
The design of the machine, including all the modules, aims to achieve the optimal wavelength for inactivation and maximum, uniform irradiance, with the ability to change the radiation settings according to the user's wishes.
The Control Unit manages communication with the other modules, data storage, the User Interface, safety checks and system enabling. The thesis covers the design choices regarding the entire design, with an in-depth analysis of the hardware implemented safety checks, the graphical user interface and the design of the communication protocol.
Due to difficulties uploading the code onto the PCB, not every developed functionality could be tested or implemented. However, the functionalities that were tested performed as expected. After the thesis has been submitted, more time will be spent on debugging the PCB and on implementing and testing its features.
The objective for the graduation project of Team UVO is to provide a proof-of-concept of disinfecting cabbage seeds (Brassica oleracea capitata) from Alternaria using UVC LEDs. This thesis describes the design process of the Control Unit module. The results of the decontamination process are provided in
Appendix F.","Seed; Fungus; Software; Control; Disinfection; Sterilization; Inactivation; Ultraviolet; UVR; UV; UVC","en","bachelor thesis","","","","","","","","2023-10-01","","","","Electrical Engineering","EE3L11",""
"uuid:c845c3ec-148b-41bf-803b-781a82993848","http://resolver.tudelft.nl/uuid:c845c3ec-148b-41bf-803b-781a82993848","Simulation of the Dutch electricity system: A software expansion for the Illuminator","de Wolff, Evelien (TU Delft Electrical Engineering, Mathematics and Computer Science); van Zonneveld, Mees (TU Delft Electrical Engineering, Mathematics and Computer Science)","Cvetkovic, M. (mentor); Delft University of Technology (degree granting institution)","2023","The aim of this report is to discuss the design of the software that creates a simulation of the national electricity grid of the Netherlands. This is done by further developing the open-source energy system integration development kit called the Illuminator. The goal of this software is to create an extra case, add a Graphical User Interface (GUI), and add a way to evaluate created configurations.","Sustainability; electricity grid; Energy transition; Education tool; Simulation; Software model","en","bachelor thesis","","","","","","","","","","","","Computer Science","",""
"uuid:40833173-cbe2-497f-8a44-6ea35e65e046","http://resolver.tudelft.nl/uuid:40833173-cbe2-497f-8a44-6ea35e65e046","Testing Distributed Database Isolation through Anti-Pattern Detection","Qiu, Jingxuan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Kulahcioglu Ozkan, Burcu (mentor); Dumbrava, Stefania (mentor); van Deursen, A. (mentor); Katsifodimos, A (graduation committee); Delft University of Technology (degree granting institution)","2023","Distributed databases often struggle to fulfill their transactional isolation guarantees due to sharding and replication. As a result, the problem of checking isolation levels is consistently receiving attention from academia and industries. Transactional dependency graphs form a useful abstraction to analyze the transactions’ dependencies and check for isolation anomalies using graph-based anti-patterns. Meanwhile, graph databases, known for their efficiency and convenience in graph representations and analytics, become promising for implementing isolation level checkers. In this work, we present a novel isolation level checker in the distributed graph database, ArangoDB. We collect execution histories from ArangoDB, operating in both single-machine and cluster modes. Also, we transform the execution histories to a dependency graph in another ArangoDB server. We then utilize customized AQL queries to detect anti-patterns on the graph. Our evaluation demonstrates the effectiveness and scalability of our checker, as well as its efficiency compared to existing isolation checkers. Also, we have found three underlying factors that are significantly correlated with the runtime of the checker: history length (the number of committed transactions), density (the density of the dependency graph), and contributing traversals (the number of traversals spent on cycles). 
The thesis artifact is online at https://github.com/jasonqiu98/GRAIL-artifact/tree/thesis.","Isolation Levels; Distributed Database Systems; Graph Databases; Graph Queries; Software Testing","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:748773c5-9651-49db-a694-7c157a8adc62","http://resolver.tudelft.nl/uuid:748773c5-9651-49db-a694-7c157a8adc62","Bug Detection in Distributed Systems with Platform-independent Fault Injection: A Case Study at Adyen","Dekker, Nick (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering)","Kulahcioglu Ozkan, Burcu (mentor); Zaidman, A.E. (mentor); Decouchant, Jérémie (graduation committee); Delft University of Technology (degree granting institution)","2023","Fault injection has been a long-standing technique for testing software. Injecting faults into a system, either in production or development environments, offers unique opportunities to discover bugs that are difficult to reproduce using conventional testing methods. However, it is widely considered to have a high implementation threshold. Due to this threshold and out of skepticism about its effectiveness, many developers are resistant to the idea of injecting faults as a testing method. This thesis introduces ""Yet Another Fault Injector"" (YAFI), a platform-independent fault injection framework designed for distributed systems.
We hope that this framework will be adopted in future fault injection research and that it will lower the bar for applying fault injection to previously hard-to-test systems.
We perform a case study to evaluate YAFI and find that with minimal implementation of fault injectors and little developer input, bugs and flaws can be detected in a system by running fault injection experiments.
A case study, performed at Adyen, shows that the system under test (SUT) is resilient in certain scenarios.
Automatically generated failure plans have been shown to exercise system behavior without requiring in-depth knowledge of the SUT.
Injected faults were reflected in the response metrics when information from the experiments was used to generate additional failure plans. This emphasizes the need for gathering proper response data and system metrics to evaluate the system's behavior under different fault conditions.
Additionally, YAFI has been executed on a project based on Apache ZooKeeper, to show portability to other systems.
By introducing YAFI and showcasing its effectiveness through the case study, this thesis contributes to the advancement of fault injection techniques and encourages wider adoption of fault injection for testing distributed systems.","Software Testing; Fault Injection; Distributed Systems; Bug-detection","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:d50b9757-f7b4-419f-9a30-51b3cac44ce5","http://resolver.tudelft.nl/uuid:d50b9757-f7b4-419f-9a30-51b3cac44ce5","The adoption of design thinking and lean startup in an agile organisation: A case study of a global financial institution","Jiang, Charles (TU Delft Industrial Design Engineering)","Snelders, H.M.J.J. (mentor); Hultink, H.J. (graduation committee); Delft University of Technology (degree granting institution)","2023","Background: GFI was a global financial institution. An innovation methodology, DTLSA, was created for GFI through the integration of Design Thinking, Lean Startup, and Agile/Scrum software development. Signs of a low-level adoption for DTLSA were witnessed internally, despite ongoing promotion efforts.
Objective: This project aimed at improving the DTLSA adoption at GFI through empirical research and design intervention (directions). First, an investigation was needed to validate the potentially low-level adoption of DTLSA and to analyse different aspects of the DTLSA adoption status quo. Then, barriers and enablers for DTLSA adoption were sought out in pursuit of a better understanding of DTLSA adoption. Furthermore, the contextual information gathered in empirical research was used to inspire the design process for potential improvement concept directions.
Research methods: A mix of different research methods were used to achieve a holistic understanding of the topic: literature and existing knowledge study, group session, interview, and survey.
Research results: The low level of DTLSA adoption was confirmed, despite positive aspects of DTLSA adoption such as sufficient understanding and high motivation at GFI. A variety of barriers and enablers were found, the majority of which were related to two significant factors: team autonomy and team engagement. Three growth stages for DTLSA adoption were identified regarding the different levels of these two factors. Team autonomy being put at risk by multiple barriers was considered the most pressing issue that led to the low level of adoption.
Design: The design problem was defined as empowering the ambassador figures who emerged in teams during the DTLSA adoption. A storyboard of a possible current situation and an imaginative narrative of an ideal situation were created. Three concept directions were proposed.","Combined approach; design thinking; Lean Startup; Agile Software Development; Organizational Adoption; Innovation transformation","en","master thesis","","","","","","","","2023-05-31","","","","Strategic Product Design","",""
"uuid:d5663b66-9456-4f9b-9c3f-0ca7e0e781db","http://resolver.tudelft.nl/uuid:d5663b66-9456-4f9b-9c3f-0ca7e0e781db","GitFL: Automated fault localization for environments where code-changes by multiple developers are tested simultaneously","van Dorth tot Medler, Jan (TU Delft Mechanical, Maritime and Materials Engineering)","de Winter, J.C.F. (mentor); Elsendoorn, K. (mentor); Eisma, Y.B. (graduation committee); Zaidman, A.E. (graduation committee); Delft University of Technology (degree granting institution)","2023","Background: For rigorous software testing, integration and end-to-end tests are essential to ensure the expected behavior of multiple interacting components of the system. When software is subjected to integration or end-to-end tests, it is often unfeasible to test every code change individually, as the runtime of these tests is usually significantly larger compared to unit tests. For this reason, batches of code changes from multiple authors are often tested simultaneously. Problem: An issue with testing multiple changes simultaneously is that when tests fail, it can be unclear which change from which author caused the failure, as any change from any author included in the test may be at fault. Design: To solve this, a new automatic fault localization algorithm called GitFL is introduced, which combines state-of-the-art fault localization with version control history information for enhanced performance. GitFL was evaluated on a C++ repository at Adyen where tests are considered to be end-to-end. Findings: The evaluation showed that the addition of version control history information significantly increases the performance of fault localization for systems where multiple changes are tested simultaneously. Societal implications: This work provides insights into improved fault localization for these systems, which could enable organizations that develop such systems to speed up their testing and development processes.
Originality: This work contributes by focusing on fault localization specifically for systems where multiple changes are tested simultaneously, which was not researched before.
Based on the results of a basic text search, we conclude that the majority of security-related activity is in reaction to known vulnerabilities and that maintainers are not always mentioning security terms when fixing exploits. We also confirm that many security-labeled issues are not pushed to vulnerability systems, even though the maintainers realize their security aspect. Then, while commit classification models can spot security-related commits automatically, the models struggle in realistic scenarios, and no particular feature or sampling method is vastly better than the others. Nonetheless, we evaluated the state-of-the-art models which spot security-related commits with an F1 score of 0.36.
Given the findings, we conclude that security-related activity is hard to automatically distinguish from everyday development activity and that manual review is required to spot these traces. Proposed methods can make this review easier. We suggest that more attention should be given to open source security to avoid early public traces of vulnerabilities.","Open source software; Machine Learning; Deep Learning; Commit representation; Source code embedding; Software security; Software vulnerability analysis; Vulnerabilities; Security advisories; Vulnerability Management","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:73764e6b-6529-4044-98e4-0d94e27b6e46","http://resolver.tudelft.nl/uuid:73764e6b-6529-4044-98e4-0d94e27b6e46","Crowd-sourced Collection and Analysis of Software Packages","Li, Bowu (TU Delft Electrical Engineering, Mathematics and Computer Science)","Proksch, S. (mentor); van Deursen, A. (graduation committee); Dumančić, S. (graduation committee); Delft University of Technology (degree granting institution)","2023","Software systems in the digital age are increasingly complex and rely heavily on reusable code collections (packages) for their development and operation. Despite the numerous advantages of pre-existing libraries, managing dependencies can be intricate and time-consuming. This thesis focuses on enhancing package management tools through a decentralized, crowd-sourced approach to distribute the preprocessing load more effectively across the software development ecosystem. We propose a novel platform comprising a back-end server and a Maven plugin, fostering an efficient and collaborative environment for developers to share computational results. This platform not only alleviates server load but also allows for the storage and reuse of frequently used artifacts, thereby avoiding redundant computations and reducing production costs for users. This crowd-sourcing model empowers developers to seamlessly request and contribute analysis results, saving time and resources while benefiting the broader community.","Crowdsourcing; Software analysis; Maven; Call graph","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:94468343-104c-4964-87dd-8cb2dbe9e01f","http://resolver.tudelft.nl/uuid:94468343-104c-4964-87dd-8cb2dbe9e01f","Use Reinforcement Learning to Generate Testing Commands for Onboard Software of Small Satellites","Li, Zhuoheng (TU Delft Aerospace Engineering)","Menicucci, A. (mentor); Guo, J. (graduation committee); Panichella, A. (graduation committee); Delft University of Technology (degree granting institution)","2022","Programmers usually write test cases to test onboard software. However, this procedure is time-consuming and needs sufficient prior knowledge. As a result, small satellite developers may not be able to test the software thoroughly.
A promising direction to solve this problem is reinforcement learning (RL) based testing. It searches testing commands to maximise the return, which represents the testing goal. Testers need not specify prior knowledge besides the reward function and hyperparameters. Reinforcement learning has matured in software testing scenarios, such as GUI testing. However, migration from such scenarios to onboard software testing is still challenging because of different environments.
This work is the first research to apply reinforcement learning in real onboard software testing and one of few studies that perform RL-based testing on embedded software without a GUI. In this work, the RL agent observes current code coverage and the interaction history, selects a pre-defined command, or organises a command from pre-defined parameters to maximise cumulative reward. The reward function can be code coverage (coverage testing) or estimated CPU load (stress testing). Three RL algorithms, including the tabular Q-Learning, Double Duelling Deep Q Network (D3QN), and Proximal Policy Optimization (PPO), are compared with a random testing baseline and a genetic algorithm baseline in the experiments.
This study also performs regression testing with a trained RL agent, i.e., to test a version of onboard software that it has never seen before. To do that, the agent processes graph input with code coverage information. The graph is extracted from the onboard software source code via static code analysis. The work tries two graph neural network architectures (GGNN and GAT) with several graph pooling mechanisms to process the graph input.
Apart from the test command generation algorithms, some middleware is also implemented, including a command/response parser, a state identification module, a branch coverage collection tool, and a tool to extract the graph representation and node features. During onboard software testing, the onboard computer (OBC) or the electrical group support equipment (EGSE) can be the master of the bus. The command generation algorithms can run on a lab PC or a cloud server.
The research reveals the advantages and drawbacks of using reinforcement learning to test onboard software. On the one hand, RL-based testing performs well in non-deterministic environments (e.g., stress testing) and regression testing. On the other hand, more straightforward methods like random testing and the genetic algorithm are more useful in deterministic environments.
This document also introduces relevant background knowledge. It offers many recommendations for future work, such as improving sampling efficiency, improving generalization, and learning a model for fault detection in satellite operation.
In this thesis, we propose a novel unsupervised probabilistic type inference approach to infer data types in a test case generation context. The approach uses both static and dynamic type inference techniques. We implemented the approach in a novel tool called SYNTEST-JAVASCRIPT, which is an extension of the SYNTEST-FRAMEWORK. We evaluate the performance of the approach compared to random type sampling with respect to branch coverage. The evaluation is done using a custom benchmark of 97 units under test.
Our results show that using statically inferred types achieves a statistically significant increase in 54% of the benchmark files compared to the baseline. The combination of both statically and dynamically inferred types improves the approach slightly, with a significant increase in 56% of the benchmark files compared to the baseline. Finally, the results show that the time consumed by static and dynamic type inference is insignificant compared to the total time budget and is worthwhile given the performance boost type inference provides.","Type Inference; Search-based Software Engineering; Dynamically Typed Languages; Test Case Generation; JavaScript; Empirical Software Engineering","en","master thesis","","","","","","","","","","","","Computer Science | Artificial Intelligence","",""
"uuid:6ba65607-8f33-41b7-8a89-700fe3df564d","http://resolver.tudelft.nl/uuid:6ba65607-8f33-41b7-8a89-700fe3df564d","Multi-Point Fluid-Dynamic Shape Optimization for Turbomachinery","Wilkens, Laurent (TU Delft Aerospace Engineering)","Pini, M. (mentor); Anand, N. (graduation committee); Delft University of Technology (degree granting institution)","2022","This work sets out the creation, verification and demonstration of an automated end-to-end tool for the multi-point fluid-dynamic optimization of turbomachinery. Taking into account multiple operating conditions during optimization aims to benefit the overall performance of turbomachines which are characterized by off-design operation and to produce more robust designs with respect to deviations in operating conditions. The feasibility and effectiveness of the tool is demonstrated by a two-point fluid-dynamic optimization of a two-dimensional Aachen turbine stator cascade, performed on the High-Performance Computing cluster of the TU Delft. By utilizing the stochastic NSGA2 optimization algorithm, the numerically noisy problem is successfully optimized. The entropy generation of the blade is reduced at both nominal and off-design operating condition by an average of 7.57%, while satisfying a constraint on the flow turning when passing through the stator cascade. Additionally, a numerical noise analysis shows that disregarding the most noisy design variables increases the probability of obtaining an improved design.","Turbomachinery; Multi Point Optimisation; Fluid-Dynamic; CFD; CAD; Turbine; Numerical Optimisation; Surrogate Modeling; mesh deformation; Global Optimisation; Gradient-based Optimisation; Open source software","en","master thesis","","","","","","","","","","","","Aerospace Engineering","","51.990205, 4.375880"
"uuid:1d6149f2-353a-4e8a-8f21-7874346cac91","http://resolver.tudelft.nl/uuid:1d6149f2-353a-4e8a-8f21-7874346cac91","Guided Metamorphic Transformations for Testing the Robustness of Trained Code2Vec Models","Marang, Ruben (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Applis, L.H. (mentor); van Deursen, A. (graduation committee); Erkin, Z. (graduation committee); Delft University of Technology (degree granting institution)","2022","Machine learning models are increasingly being used within software engineering for their predictions. Research shows that these models’ performance is increasing with new research. This thesis focuses on models for method name prediction, for which the goal is to have a model that can accurately predict method names. With this thesis, we could create a tool that can suggest method names to software developers, which would assist in improving the quality of the projects.
This research aims to get insight into the robustness vulnerabilities of a method name prediction model. We use a genetic search algorithm that looks for these robustness problems. The main question this thesis tries to answer is to what extent the performance metrics are affected by applying metamorphic transformations to the test set of a trained code2vec model. Besides this, this thesis also proposes an alternative metric called percentage MRR, which might better reflect the robustness of a model. The main idea behind this metric is that it penalizes the prediction certainty of a model instead of penalizing the prediction rank.
To answer this research question, a tool is created that runs a genetic algorithm applying these metamorphic transformations to a dataset on which a trained model is then evaluated. With this tool, we conducted 22 genetic search experiments on primary metrics and combinations of metrics to see the trade-offs in the Pareto fronts. The guided search of applying metamorphic transformations to the test set results in an average performance decrease of around 19%. This thesis also compares this drop in performance to the performance decrease a random search algorithm would create. Notably, for every transformer added, the average decrease in performance becomes smaller, and there are transformations, e.g., the if-false-else transformation, that have a bigger effect than others. This thesis concludes that the trained model is not robust against metamorphic transformations and has a significant performance drop.","Genetic algorithm; Machine learning; Metamorphic testing; Metamorphic transformations; Neural network; Genetic search; Robustness; Evaluation framework; software engineering","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:a3c92ce8-710c-4623-a0e1-5323e20c1c3a","http://resolver.tudelft.nl/uuid:a3c92ce8-710c-4623-a0e1-5323e20c1c3a","Predicting Delays in Software Deliveries using Networked Classification at ING","Moelchand, Pravesh (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Deursen, A. (graduation committee); Kula, E. (mentor); Delft University of Technology (degree granting institution)","2022","Delays in the delivery of software projects and the corresponding cost and schedule overruns have been common problems in the software industry for years. A challenge within software project management is to make accurate effort estimations during planning. Software projects are complex networks, with multiple dependencies between software tasks.
This study aims to combine the fields of effort estimation and networked classification to utilise network information for delay prediction in industry. We conducted a case study at ING, resulting in a number of insights with regard to networked classification in an industry setting.
There is a difference in the organisational structure of open-source and industry projects. This results in a difference in available information, but it also creates an opportunity to leverage the organisational structure of ING to improve delay prediction performance.
Using weights in networked classification has shown no improvement compared to not using them, but relational models do benefit from larger datasets, as larger networks contain more relational information.
Based on these insights, we recommend that ING keep track of more information, improve data quality by educating its teams, and create models for specific domains or teams to leverage its organisational structure.","Effort Estimation; Networked Classification; Software Engineering Management; Delay Prediction","en","master thesis","","","","","","","","","","","","Computer Science | Data Science and Technology","",""
"uuid:6d71f8a8-9941-4d57-bedb-8b3fb8c841e9","http://resolver.tudelft.nl/uuid:6d71f8a8-9941-4d57-bedb-8b3fb8c841e9","Exploring the practice of organisational Security Patch Management from a socio-technical perspective: Using a Mixed Methods Approach to investigate IT-practitioners’ decision-making and patch activity","van Engelen, Yves (TU Delft Technology, Policy and Management)","Parkin, S.E. (mentor); van Eeten, M.J.G. (graduation committee); Janssen, M.F.W.H.A. (mentor); Delft University of Technology (degree granting institution)","2022","In the current digitalised society keeping assets secure is one of the most prominent challenges organisations face. In the ongoing arms race between attackers and defenders, software security patching is a well-recognised and effective strategy to mitigate vulnerabilities in software products. However, organisations struggle with the best practice to “patch early and often”, resulting in vulnerabilities in software being exposed for much longer than desired. Prior research indicates the socio-technical nature of this practice forms the core of delays in software patch management. Developing a deeper understanding of the decision-making of IT practitioners and what socio-technical factors play a role in this process allows organisations to address the ineffectiveness of their security patch process. The main research question in this explorative research is: What socio-technical factors influence the effectiveness and timeliness of the security patching process in organisations? This Mixed Methods research combines qualitative data from interviews with IT practitioners, with a quantitative data exploration of the meaningfulness of organisational measurements. Findings show that IT practitioners go through a funnel of decision-making that influences the decision of what to patch, and when to patch. 
The presence and interplay of different socio-technical factors related to four main aspects of this decision (i.e., security, applicability, operability, and availability) result in tensions and trade-offs influencing the decision space of IT. Furthermore, this study indicates interrelations between socio-technical factors, whose significance is reduced by certain coping strategies applied by IT practitioners. This research reveals that having some measurement in place helps to understand the existence of challenges and the working of coping strategies, thereby contributing to an understanding of socio-technical challenges. However, it also reveals several limitations to the quality of existing data and difficulties in arriving at measurements that provide meaningful information, due to socio-technical factors. The main contribution of this research is a better understanding of how socio-technical factors influence the decision-making process of IT practitioners. This research is limited in the way it uses quantitative data to understand patching activity. Future research is recommended to compare the potential discrepancy between what IT practitioners state influences the effectiveness of their security patch process and what their actual patching activity reveals about the effectiveness of patching. This research furthermore hypothesises that not all socio-technical factors have the same level of significance. It is recommended to investigate the possibilities of quantifying the importance of each of the socio-technical challenges identified in this explorative study.","Software Security Patching; Information Security; Risk Management; Mixed Methods Approach; Socio-technical challenges","en","master thesis","","","","","","","","","","","","Complex Systems Engineering and Management (CoSEM)","",""
"uuid:50562fb5-f04c-472c-b935-c3928765f24d","http://resolver.tudelft.nl/uuid:50562fb5-f04c-472c-b935-c3928765f24d","Utilizing Lingual Structures to Enhance Transformer Performance in Source Code Completions","Katzy, Jonathan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Aniche, Maurício (mentor); Mir, S.A.M. (graduation committee); Delft University of Technology (degree granting institution)","2022","We explored the effect of augmenting a standard language model’s architecture (BERT) with a structural component based on the Abstract Syntax Trees (ASTs) of the source code. We created a universal abstract syntax tree structure that can be applied to multiple languages to enable the model to work in a multilingual setting. We adapted the general graph transformer architecture to function as the structural component of the transformer. Furthermore, we extended the Embeddings from Language Models (ELMo) style embeddings to work in a multilingual setting when working with incomplete source code. The final results showed that the multilingual setting was beneficial to achieving higher quality embeddings for the embedding model, however, monolingual models performed better in most cases for the transformer model. The addition of ASTs resulted in increased performance in the best performing models on all languages, while also reducing the need for a pre-training task to achieve the best performance. 
Compared to their baseline counterparts, the largest average increases in performance on the test set were 3.0% for a Java model, 1.1% for a Julia model, and 5.7% for a C++ model.","Machine learning; Machine Learning for Software Engineering; Transformer; Language modeling; Code Completion","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:4ce70a5f-777a-4b9a-9b37-1fe39ea1f98f","http://resolver.tudelft.nl/uuid:4ce70a5f-777a-4b9a-9b37-1fe39ea1f98f","Blockchains and Security: Grammar-Based Evolutionary Fuzzing for JSON-RPC APIs and the Division of Responsibilities","Veldkamp, Lisette (TU Delft Applied Sciences; TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering; TU Delft Science Education and Communication)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Kalmar, E. (mentor); Wehrmann, C. (graduation committee); Verwer, S.E. (graduation committee); Bosman, P.A.N. (graduation committee); Delft University of Technology (degree granting institution)","2022","The continual increase in cyber crime revolving blockchain applications calls for secure blockchain systems and clarity on the division of security responsibilities. This research is an integrated project between two master programmes at the Delft University of Technology: Computer Science and Communication Design for Innovation, and focuses on software testing and security responsibilities.
In this study, we investigate whether grammar-based fuzzing, a popular approach for identifying bugs in software, is effective on JSON-RPC systems such as the blockchain applications Ripple and Ethereum. Furthermore, we evaluate whether we can improve upon traditional grammar-based fuzzing by using evolutionary search.
We introduce GEFRA, a black-box grammar-based fuzzing tool that generates tests for JSON-RPC APIs.
Using a diversity-based fitness function that leverages system feedback, GEFRA is able to effectively guide the search process towards new test cases that obtain additional test coverage.
Additionally, various perspectives on blockchain security responsibilities were investigated through a media content analysis and interviews with legal and blockchain experts.
News media frequently frame end users as responsible for the prevention of blockchain attacks. While attackers are legally responsible, users are left to deal with the consequences if attackers cannot be found. Responsibilities generally end up with users as decentralisation is the core idea of blockchain. Legislation may be the only solution to define a clear division of responsibilities.","Blockchain; Software Testing; Fuzzing; Test Generation; JSON-RPC API; Evolutionary Algorithm; Responsibility Division; Framing; Media Content Analysis","en","master thesis","","","","","","Double degree in Computer Science | Cyber Security and Applied Sciences | Communication Design for Innovation","","","","","","Computer Science | Cyber Security","",""
"uuid:5ac105ac-f2d0-4891-8b20-f5caae141854","http://resolver.tudelft.nl/uuid:5ac105ac-f2d0-4891-8b20-f5caae141854","DiscoTest: Evolutionary Distributed Concurrency Testing of Blockchain Consensus Algorithms","van Meerten, Martijn (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Özkan, B. (mentor); van Deursen, A. (graduation committee); Decouchant, Jérémie (graduation committee); Delft University of Technology (degree granting institution)","2022","Distributed concurrency bugs (DC bugs) are bugs that are triggered by a specific order of events in distributed systems. Traditional model checkers systematically or randomly test interleavings but suffer from the state-space explosion in long executions. This thesis presents DiscoTest, a testing tool for DC bugs in blockchain consensus algorithms. The tool guides the search for schedules that trigger DC bugs by an evolutionary algorithm (EA). We apply the tool to Ripple's consensus algorithm (RCA) and design and evaluate two representations and fitness functions.
We evaluate the representations on locality, redundancy, and scaling, by using graph edit distance (GED) to calculate the distance between schedules. We find that delay scheduling and priority scheduling are representations that allow variation operators of an EA to modify schedules. To evaluate the performance of the representations and fitness functions, we create a custom bug benchmark for RCA. An empirical comparison on the benchmark shows that delay scheduling with time fitness results in a significantly higher success rate than random search on one bug. Finally, we discover an in-production liveness bug in RCA.","Distributed Systems; Software Testing; Distributed Concurrency; Ripple; Search-Based Software Testing; Evolutionary Algorithms","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:f2cf3430-328c-4489-9f79-ce1d739eae47","http://resolver.tudelft.nl/uuid:f2cf3430-328c-4489-9f79-ce1d739eae47","A Toolchain for Streaming Dataflow Accelerator Designs for Big Data Analytics: Defining an IR for Composable Typed Streaming Dataflow Designs","Reukers, Matthijs (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Quantum & Computer Engineering)","Hofstee, H.P. (mentor); Al-Ars, Z. (graduation committee); Peltenburg, J.W. (graduation committee); van Leuken, T.G.R.M. (graduation committee); Delft University of Technology (degree granting institution)","2022","Tydi is an open specification for streaming dataflow designs in digital circuits, allowing designers to express how composite and variable-length data structures are transferred over streams using clear, data-centric types. This provides a higher-level method for defining interfaces between components as opposed to existing bit- and byte-based interface specifications.
In this thesis, an open-source intermediate representation (IR) is introduced which allows for the declaration of Tydi's types. The IR enables creating and connecting components with Tydi Streams as interfaces, called Streamlets. It also lets backends for synthesis and simulation retain high-level information, such as documentation. Types and Streamlets can be easily reused between multiple projects, and Tydi’s streams and type hierarchy can be used to define interface contracts, which aid collaboration when designing a larger system.
The IR codifies the rules and properties established in the Tydi specification and serves to complement computation-oriented hardware design tools with a data-centric view on interfaces. To support different backends and targets, the IR is focused on expressing interfaces and complements behavior described by hardware description languages and other IRs. Additionally, a testing syntax is presented for verifying inputs and outputs against abstract streams of data and for substituting interdependent components, which allows for the specification of behavior.
To demonstrate this IR, a grammar, parser, and query system have been created, and paired with a backend targeting VHDL.","Hardware description languages and compilation; Design reuse and communication-based design; Transaction-level verification; Intermediate representations; Open-source software; Hardware streams","en","master thesis","","","","","","","","2022-11-03","","","","Computer Engineering","",""
"uuid:bd2b288e-08f9-4200-92c8-ee48bfff9408","http://resolver.tudelft.nl/uuid:bd2b288e-08f9-4200-92c8-ee48bfff9408","Automated Detection of Code Smells for Machine Learning Applications","Zhang, Haiyin (TU Delft Electrical Engineering, Mathematics and Computer Science)","Cruz, Luis (mentor); van Deursen, A. (mentor); Yang, J. (graduation committee); Delft University of Technology (degree granting institution)","2022","The popularity of machine learning has wildly expanded in recent years. Machine learning techniques have been heatedly studied in academia and applied in the industry to create business value. However, there is a lack of guidelines for code quality in machine learning applications. Although machine learning code is usually integrated as a small part of an overarching system, it usually plays an important role in its core functionality. Hence ensuring code quality is quintessential to avoiding issues in the long run. To help improve the machine learning code quality, we conducted two studies in this thesis. The first study proposes and identifies a list of 22 machine learning-specific code smells collected from various sources, including papers, grey literature, GitHub commits, and Stack Overflow posts. We pinpoint each smell with a description of its context, potential issues in the long run, and proposed solutions. In addition, we link them to their respective pipeline stage and the evidence from both academic and grey literature. The second study aims to develop a tool to improve code quality and study the prevalence of machine learning-specific code smells. We extend a static analysis tool dslinter and run it on both Python notebook datasets and regular Python project datasets. Moreover, we analyse the result to check the tool's validity and investigate the code smell prevalence in machine learning applications. 
The code smell catalog and dslinter together help data scientists and developers produce and maintain high-quality machine learning application code.","Software Quality; Static Code Analysis; Machine Learning; Technical Debt","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:690fa42f-7640-452b-a0cb-9c4d135e5902","http://resolver.tudelft.nl/uuid:690fa42f-7640-452b-a0cb-9c4d135e5902","Agile software development and IT-architecture interactions in the public sector: A multi-case study approach to identify whether these roles are complementary or counterproductive","van der Vliet, Stan (TU Delft Technology, Policy and Management)","Janssen, M.F.W.H.A. (mentor); van der Voort, H.G. (graduation committee); Heijnen, W.G.P. (graduation committee); Delft University of Technology (degree granting institution)","2022","Large software projects often overrun costs, development time and do not deliver what has been envisioned by the customer. An important factor contributing to these cost overruns is the mismatch in the approaches of IT-architects and agile software development roles. In literature there are hints that complementary added value could be achieved from interaction of these roles. For practitioners there is need for governance strategies that improve the added value of combining these roles and their approaches in a complementary way. To identify how governance strategies help to obtain complementary added value from IT-architecture and agile software developer interactions this research used a multi-case study approach with exploratory and theory building focus. Interviews with IT-architecture and agile developer roles were used to collect data and compose case studies. This approach allowed to identify governance strategies with governance strategies that were used in practice. Multiple cases were chosen to analyse governance strategies across cases, which improved generalisability. 
The results include a typology of interaction models, a set of added values and problems found in the cases, related to their interaction models, and a set of descriptions of governance strategies that could be used to achieve complementary added value from the interaction of software architects and agile development teams. The case studies and results provide practitioners with the opportunity to update their knowledge and change their perspective on the interaction of IT-architecture and agile development roles. Future research could expand the results in terms of the breadth and depth of organisations.","IT-architecture; Agile Software Development; Governance; Agile; Enterprise architecture; Software architecture; Solution architecture; Product owner; public sector; Government; Complementarity; Added Value; Tensions; Bottlenecks; problems; Case study; Multi-case study; Exploratory; Theory building; Explanatory; Governance strategies; Strategy; IT; BPM; Agility; Software development; Complex systems; ICT; ICT Architecture","en","master thesis","","","","","","","","","","","","Complex Systems Engineering and Management (CoSEM)","",""
"uuid:a3de8838-8e9a-4c70-8a04-a9f296be7c84","http://resolver.tudelft.nl/uuid:a3de8838-8e9a-4c70-8a04-a9f296be7c84","Green AI: An empirical study","Yarally, Tim (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Deursen, A. (mentor); Cruz, Luis (mentor); Weinmann, M. (mentor); Feitosa, Daniel (mentor); Delft University of Technology (degree granting institution)","2022","In this work, we look at the intersection of Sustainable Software Engineering and AI engineering known as Green AI. AI computing is rapidly becoming more expensive, calling for a change in design philosophy. We consider both training and inference of neural networks used for image vision; to reveal energy-efficient practices in an exploratory fashion.
First, we examine a modern algorithm for hyperparameter optimisation and compare it to two baseline methods. We find that the baseline algorithms perform considerably worse despite their wide usage and argue that they should not be used when training large models. Furthermore, we look at the layer structure of convolutional networks and conclude that the convolutional layers have the largest influence on the total consumption. We report increases in energy consumption of up to 95% with only marginal improvements in accuracy. Therefore, we recommend that developers reduce their network architectures as long as performance stays within a reasonable margin.
Second, we present a study focused on the inference phase of the deep learning pipeline. We look at the effect of batching for image classification requests. To facilitate the data collection, we make use of a simulated queue and the PyTorch framework. We find that batching has a significant impact on energy consumption, but the magnitude of this impact can vary considerably across models. Our recommendation is to treat the batch size as an inference parameter that needs to be tuned first. Additionally, we highlight how the energy consumption of image vision networks has evolved over the past decade. Presenting these findings together with the performance of the networks shows a steady upward energy trend accompanied by a decreasing slope for accuracy. The only exception is the model ShuffleNetV2. We discuss the design principles that went into the development of this network and present it as a starting point for future research.","Green AI; Software Engineering; Hyperparameter Optimization; Image Classification","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:bf321713-06a4-4a6b-834e-a0aa15a0bf64","http://resolver.tudelft.nl/uuid:bf321713-06a4-4a6b-834e-a0aa15a0bf64","Practical Verification of Lenses: Implementing Formally Verified Lenses using agda2hs","Massar, Marnix (TU Delft Electrical Engineering, Mathematics and Computer Science)","Cockx, J.G.H. (mentor); Escot, L.F.B. (mentor); Wang, Q. (graduation committee); Delft University of Technology (degree granting institution)","2022","agda2hs is a tool which translates a subset of Agda to readable Haskell. Using agda2hs, programmers can implement libraries in this subset of Agda, formally verify them, and then convert them to Haskell. In this paper we present a new, verified implementation of the lens data type, which is used to access data structures in a readable yet functionally pure way. We show successfully verified lenses for record types and tuples, and also present a lens operating on lists that could not be translated properly. We discuss the obstacles encountered during development, and offer thoughts on possible improvements to agda2hs.","Agda; software verification; agda2hs; Haskell; functional programming","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:04c0c3e2-df6c-40f4-a337-de71f0d72537","http://resolver.tudelft.nl/uuid:04c0c3e2-df6c-40f4-a337-de71f0d72537","Study the impact of topology-related attacks in Software Defined Network","Ivaşcu, Darius (TU Delft Electrical Engineering, Mathematics and Computer Science)","Lal, C. (mentor); Conti, M. (mentor); Urbano, Julián (graduation committee); Delft University of Technology (degree granting institution)","2022","The Software Defined Network (SDN) is a relatively new paradigm that aims to tackle the lack of centralization in the existing network by separating the control centre from the programming data plane. The controller keeps an overview of the structure of the whole network, which makes it vulnerable to possible topology poisoning attacks. Topology attacks aim to disrupt the overview of the controller over the structure of the network in order to intercept or disrupt the transfer of the packages over the SDN network. In this paper, a survey on the state-of-the-art on topology attacks is conducted, followed by an analysis of the limitations of the existing solutions, and a comparison between the verification process of each solution and the number of known vulnerabilities are presented. Further, possible future research directions are proposed for improving these solutions and fixing the mentioned limitations and vulnerabilities.","SDN; Topology; Security; Software Defined Network","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:61d45fb3-ac3c-447c-a05d-1f7499725a3b","http://resolver.tudelft.nl/uuid:61d45fb3-ac3c-447c-a05d-1f7499725a3b","Developer-Centric Test Amplification: User-Guided Test Amplification","WANG, DANYAO (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, A.E. (mentor); Brandt, C.E. (graduation committee); Gadiraju, Ujwal (graduation committee); Delft University of Technology (degree granting institution)","2022","Automated test generation techniques improve the efficiency of software testing. However, the opacity of the test generation process and concerns about the readability of generated tests make it difficult for software developers to accept them. Developer-centric test amplification creates easy-to-understand test cases by amplifying existing test cases that developers are familiar with and assists developers in integrating them into their test suite. We propose user-guided test amplification to allow developers to guide the test amplification to generate new test cases based on their branch coverage expectations. We create a user-guided test amplification prototype that starts with the method developers want to test, aids developers in communicating which branch should be covered, and assists developers in inspecting and selecting the amplified test cases. We conduct a technical case study with two Java projects and show that our approach cannot always produce a test case to cover a given branch because objects are not initialized with the right parameter values to fulfill the target branch condition. We also perform a user study with 12 software developers to investigate developers' opinions on our approach. The evaluation result shows that the user-guided test amplification generates amplified test cases that developers are satisfied with and is especially useful when developers want to generate tests to cover a specific branch. 
Connecting the developers' coverage goal and the amplified test cases enables developers to understand and select the test cases more easily.","User-Guided Test Amplification; Developer-Centric Test Amplification; Software Testing; Test Generation","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:e7472523-0652-44ca-9745-8337970b048b","http://resolver.tudelft.nl/uuid:e7472523-0652-44ca-9745-8337970b048b","Controller-Related Security Risks and Vulnerabilities in Software-Defined Networking","Plas, Nicolas (TU Delft Electrical Engineering, Mathematics and Computer Science)","Lal, C. (mentor); Conti, M. (mentor); Delft University of Technology (degree granting institution)","2022","Software-Defined Networking (SDN) is a relatively new networking paradigm that proposes to separate the control and the data logic in networks. The control logic is centralized in a controller, which allows for a programmable network. SDN is promising but also intro- duces some critical security vulnerabilities to networks. This work proposes a survey of state-of-the-art research into attacks and state-of-the-art defences arising from controller place- ment, controller failure and the northbound interface. Furthermore, it proposes a comparison and analysis of the limitations of that research. Finally, it proposes future research directions to improve SDN security focused on network con- sistency and on the interoperability of different defences.","Software Defined Networking; SDN; Cybersecurity; Controller; Controller Placement; Controler Failure; Northbound Interface; NBI","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:228d42eb-54ab-4775-acd6-b4d0a41e6e76","http://resolver.tudelft.nl/uuid:228d42eb-54ab-4775-acd6-b4d0a41e6e76","Analysing the Impact of Inline Comments for the Task of Code Captioning","Bacevičius, Vidas (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Applis, L.H. (mentor); Gerritsen, B.H.M. (graduation committee); Delft University of Technology (degree granting institution)","2022","AI-assisted development tools use Machine Learning models to help developers achieve tasks such as Method Name Generation, Code Captioning, Smart Bug Finding and others. A common practice among data scientists training these models is to omit inline code comments from training data. We hypothesize that including inline comments in the training code will provide more information to the model and improve the model's performance for natural-language related tasks, specifically Code Captioning. We adjust one of these models, code2seq, to include inline comments in its data processing, then train and compare it to a commentless version. We find that including inline comments tends to increase the performance of the model by making it faster and producing more verbose results, and then reflect on the results of this work to formulate suggestions on how to improve upon this body of research.","Software Development; comments; Machine Learning","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:ca6412ec-b2dd-4e1e-8588-a479d840dabb","http://resolver.tudelft.nl/uuid:ca6412ec-b2dd-4e1e-8588-a479d840dabb","Finding most used software application by using a time-dependency graph","Dumitru, Alexandru (TU Delft Electrical Engineering, Mathematics and Computer Science)","Gousios, Giorgos (mentor); Spinellis, D. (mentor); Anand, A. (mentor); Delft University of Technology (degree granting institution)","2022","Using open-source packages when developing software applications is the general practice among a vast amount of software developers. However, importing open-source code which may depend on other existing technologies may lead to the appearance of a transitive dependency chain. As a result, failure of packages with a high amount of transitive dependants may have an impact on the performance of all dependant applications. This work focuses on designing a graph data structure which maps the dependency relationship between packages as edges, with nodes representing a single version of a certain package. Moreover, the data structure may perform queries based on time intervals, being able to resolve versions in the same manner as a package distributor would. By constructing such data structure, an analysis of the most critical packages for an ecosystem can be conducted. This paper looks mainly into the Debian ecosystem, and searches for applications which are most critical. Based on this paper criticality evaluation, the package dpkg was found to be both most critical and most used in whole Debian's package main repository.","Depedency Network; Debian; Software applications; Graph","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:02720548-effa-4707-af99-718667c91ae3","http://resolver.tudelft.nl/uuid:02720548-effa-4707-af99-718667c91ae3","Training a Machine-Learning Model for Optimal Fitness Function Selection with the Aim of Finding Bugs","Dimitrov, Stoyan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Derakhshanfar, P. (mentor); Höllt, T. (graduation committee); Delft University of Technology (degree granting institution)","2022","To ensure that a software system operates in the correct way, it is crucial to test it extensively. Manual software testing is severely time-consuming, and developers often underestimate its importance. Consequently, many tools for automatic test generation have been developed during the past decade. EvoSuite is a state-of-the-art tool for automatic generation of unit tests. It can produce test suites based on chosen coverage criteria, also known as a fitness function. Previous studies have widely assessed the performance of the different fitness functions available in EvoSuite. However, the combination of various coverage criteria has not been considered. In this paper, we assess the effectiveness of the combination of Branch coverage and Output diversity fitness functions. We compare it to two of the most popular fitness functions in EvoSuite - Branch coverage and the Default configuration (combines eight coverage criteria) to estimate its performance. We developed a machine learning tool that determines which fitness function will achieve better results based on class characteristics. The assessment criteria we consider are branch coverage and fault detection, represented by mutation score. We further examined how the time limit affects the performance of the considered fitness functions. 
The results have shown that the combination of Branch coverage and Output diversity outperforms the Default configuration significantly in branch coverage but has worse performance in fault detection capabilities. We have also found that the Branch and Output diversity coverage criteria achieve better results when compared with only using Branch coverage in terms of mutation score. Additionally, the static software metrics, especially CBO, LCOM* and LOC, are highly correlated with the performance of the fitness functions and can predict which coverage criteria will achieve better results.","Testing; Software testing; EvoSuite; Code Metrics","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:7031cf6f-89a8-41f3-9ee9-f6da66a23279","http://resolver.tudelft.nl/uuid:7031cf6f-89a8-41f3-9ee9-f6da66a23279","Exploitation of P4 Programmable Switch Networks","Frensel, Mees (TU Delft Electrical Engineering, Mathematics and Computer Science)","Kuipers, F.A. (mentor); Ji, C. (mentor); Molenaar, M.L. (graduation committee); Delft University of Technology (degree granting institution)","2022","P4 programmable data-planes provide operators with a flexible method to set up data-plane forwarding logic. To deploy networks with confidence, a switch's forwarding logic should correspond with its intended behavior. Programs loaded onto programmable data-planes don't necessarily go through as much testing as traditional fixed-function devices from large manufacturers. Security is therefore of utmost importance.
The main question this research attempts to answer is whether a single compromised P4 switch can corrupt the entire (P4) network. In this scenario, the attacker already has access to the compromised switch, and the assumption is made that all devices blindly trust each other. Two load balancing schemes are investigated: Clove-ECN and HULA. The former performs load balancing on the hosts, and results show that switches can transparently influence traffic flow by manipulating the ECN bits. The latter is designed for implementation on the data plane, e.g. using P4, and we conclude that HULA is susceptible to attacks that spoof probe packets with false data.","Programmable Data Planes; Software Defined Networking; Network Security","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:8b08f640-92e6-401f-bf8f-384e79fcec0d","http://resolver.tudelft.nl/uuid:8b08f640-92e6-401f-bf8f-384e79fcec0d","A study of bugs found in the Ansible configuration management system","Rastenis, Matas (TU Delft Electrical Engineering, Mathematics and Computer Science)","Sotiropoulos, Thodoris (mentor); Spinellis, D. (mentor); Broz, F. (graduation committee); Delft University of Technology (degree granting institution)","2022","Research that focuses on examining software bugs is critical when developing tools for preventing and for fixing software issues. Previous work in this area has explored other types of systems, such as bugs of compilers and security issues stemming from open source systems hosted on public repositories. This paper explores the bugs within the Ansible software provisioning and configuration management system. The main question this paper seeks to answer is ""What common patterns can be extracted from the bugs found and what are the root causes, symptoms, triggers, system-dependence factors, fixes, and the impact of the most frequent types of bugs in the Ansible configuration management system"". This study defines a data pipeline and custom tools to extract and analyze 100 Ansible bugs. Common classifications are determined, and the bugs are manually classified, revealing common patters within the bugs. Insights are drawn from the aggregated data, and recommendations are made for addressing bug-prone areas of execution and connectivity components, and expanding the test suite with input fuzzing and a genetic algorithms test solution, in order to improve the overall code quality of the Ansible code base.","Ansible; Bug Analysis; Software; Code Quality; Configuration as Code; DevOps; bugs","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:dd813e67-8501-47e1-9774-a5a32c054e63","http://resolver.tudelft.nl/uuid:dd813e67-8501-47e1-9774-a5a32c054e63","Method Popularity Distributions of Software Artefacts within Maven Central","Nulle, Thijs (TU Delft Electrical Engineering, Mathematics and Computer Science)","Keshani, M. (mentor); Gerritsen, B.H.M. (graduation committee); Proksch, S. (mentor); Delft University of Technology (degree granting institution)","2022","Even though previous studies have studied software artefacts on a package level, little research has been done on a method level. In this work, we perform a method-level analysis to determine how popularity disperses among methods within software libraries of Maven Central. We analyse 384 software artefacts with three different metrics: eigenvector centrality, degree centrality and dependent usage percentage. Using callgraphs of the interactions of a software artefact with its dependents, we can determine the relative popularity score of any method. We observe that popularity is inverse logarithmically distributed among the most frequently used methods within a library. Furthermore, 80% of calls to a library are to 26% of all methods, following the Pareto Principle. Likewise, the number of dependents per artefacts also follows a power-law distribution. We also find that no significant correlation exists between any of the analysed metrics, allowing opportunities for future research to determine a more accurate popularity metric. All of our results show that method popularity is logarithmically distributed within software artefacts of Maven Central.
In particular, software remodularisation tools focus on improving the code structure quality with minimal effort by suggesting changes to the developers to obtain an improved modularisation.
While there has been considerable research on automated software remodularisation, it often faces one or more of the following three shortcomings.
First, the approach is applied to small or medium-size codebases, raising the question of whether it scales to large codebases. Second, the results are not validated by the developers of these codebases. Last, the algorithm optimises only from a code quality metrics point of view, not considering the perspective and knowledge of developers. In this thesis, we propose an approach to capture developers' domain knowledge of a large-scale object-oriented codebase, which uses an NSGA-III algorithm to suggest remodularisations that improve code structure quality and adhere to developer knowledge. Additionally, the results of the algorithm are validated by the developers. The results in this thesis show that with little effort, the domain knowledge of developers can be captured and used to improve the suggestions made by the algorithm.","Software remodularisation; Genetic algorithms; Software development; Refactoring; NSGA-III; Software modularisation","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:f04f8f0b-9ab9-4f1c-a19c-43b164d45cce","http://resolver.tudelft.nl/uuid:f04f8f0b-9ab9-4f1c-a19c-43b164d45cce","A BDI-based Virtual Agent for Training Child Helpline Counsellors","Grundmann, Sharon (TU Delft Electrical Engineering, Mathematics and Computer Science)","Brinkman, W.P. (mentor); Bruijnes, M. (mentor); Liem, C.C.S. (graduation committee); Tielman, M.L. (graduation committee); Vroonhof, Ellen (graduation committee); Delft University of Technology (degree granting institution)","2022","Around the world, child helplines through their services provide a safe and confidential space for children to be heard and empowered. The Dutch Kindertelefoon is one of such helplines providing counselling services to children via call and chat all year round. In this thesis, we explore the design of a conversational agent for training counsellors of the Kindertelefoon. More specifically, we explore the design of an agent in a role-play setting where the agent acts as a child help seeker and the user, a counsellor of the helpline. We designed a conversational agent based on the Belief-Desire-Intention (BDI) model of agency that simulates a child victim of school bullying. Through interaction with the agent, a counsellor is able to learn the Five Phase Model, the conversation model that underpins the helpline's counselling methodology to ensure conversations remain child-centered. We tested a prototype based on this design with a group of counsellors at the Kindertelefoon with regards to their counselling self-efficacy and perceived usefulness of the system. Our results show that the conversational agent is able to influence the counselling self-efficacy of users, albeit a decrease in self-efficacy. The opposite would have been preferred for a learning tool to enable counsellors achieve more effective performance over time. 
However, feedback from participants indicates the potential of this conversational agent as an additional learning opportunity for training counsellors at the helpline.","Conversational Agent; belief-desire-intention software model (BDI); Child Counselling; Chatbot","en","master thesis","","","","","","https://osf.io/hkxzc OSF form - Evaluation of a BDI-based Virtual Agent for Training Child Helpline Counsellors","","","","","","","",""
"uuid:c2cc9013-06c7-4c2d-a60a-449c6cd8ed39","http://resolver.tudelft.nl/uuid:c2cc9013-06c7-4c2d-a60a-449c6cd8ed39","VC valuation and multiples: an exploration of comparable analysis of software start-ups","Struyvelt, Loïc (TU Delft Technology, Policy and Management)","Giga, A. (mentor); Ralcheva, Aleksandrina (graduation committee); Scholten, V.E. (graduation committee); van Beers, Cees (graduation committee); Delft University of Technology (degree granting institution)","2022","This master thesis seeks to better understand the investment valuation procedure followed by software venture capitalists (VC) in the European context. I explain how VCs perform fair value estimations of software start-ups with the emerging comparable analysis technique. Furthermore, this study examines the relative importance of start-up characteristics in determining the multiple and how these factors influence the VC’s valuation behaviour. Additionally, I explore whether this behaviour and the multiples paid can be explained by differences in VC firm experience at a time of historically low interest rates and record-breaking fund inflows. Based on 36 interviews with European VCs, primarily from the Benelux region, I find that all start-up characteristics matter in the determination of the multiple, but the management team a little more. As a result, software VCs are willing to pay higher multiples for stellar management teams than for exceptional business characteristics. In contrast with the other characteristics, poor traction does not necessarily kill the deal, but VCs might rather use it to enforce a lower valuation. Overall, VC firm experience is not a strong predictor of the valuation behaviour and ARR multiples paid for deals. 
However, I do find that more experienced VCs are willing to pay higher premiums for benchmark-exceeding traction than their less experienced counterparts.","Venture Capital; Start-up; software; Valuation; Comparable analysis","en","master thesis","","","","","","","","","","","","Management of Technology (MoT)","",""
"uuid:b20883f8-a921-487a-8a65-89374a1f3867","http://resolver.tudelft.nl/uuid:b20883f8-a921-487a-8a65-89374a1f3867","Code Smells & Software Quality in Machine Learning Projects","van Oort, Bart (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering; ING AI for FinTech Research)","Cruz, Luís (mentor); van Deursen, A. (mentor); Loni, B. (graduation committee); Liem, C.C.S. (graduation committee); Delft University of Technology (degree granting institution)","2021","Artificial Intelligence (AI) and Machine Learning (ML) are pervasive in the current computer science landscape. Yet, there still exists a lack of Software Engineering (SE) experience and best practices in this field. One such best practice, static code analysis, can be used to find code smells, i.e., (potential) defects in the source code, refactoring opportunities, and violations of common coding standards. This research first set out to measure the prevalence of code smells in ML application projects. However, the results from this study additionally showed deficiencies in the dependency management of these projects, presenting a major threat to their maintainability and reproducibility. Static code analysis practices were also found to be lacking. These issues inspired the novel concept of project smells introduced in this research, which consider the ML project as a whole, including not just the code, but also the data, tools and technologies surrounding it and its development. To help ML practitioners in detecting and mitigating these project smells, as well as to help educate on SE principles, techniques and tools, I developed an open-source static analysis tool mllint using input from experienced ML engineers at the global bank and data-driven organisation ING. This tool was then used to evaluate the concept of project smells and how they fit the industrial context of ING in a second study. 
This second study also investigated obstructions to implementing best practices recommended by mllint, perceptions of static analysis tools, and how ML practitioners perceive the difference in importance of mllint's linting rules (and, by extension, project smells) for proof-of-concept versus production-ready projects. The results indicate a need for context-aware static analysis tools that fit the needs of the project at its current stage of development while requiring minimal configuration effort from the user.","code smells; software quality; machine learning; artificial intelligence; project smells; mllint; se4ml; software engineering; context-aware static analysis","en","master thesis","","","","","","","","","","","","Computer Science | Software Technology","",""
"uuid:0276e693-3408-4472-9749-b754c2114183","http://resolver.tudelft.nl/uuid:0276e693-3408-4472-9749-b754c2114183","Fast Numerical Nonlinear Fourier Transform Algorithms for the Manakov Equation","de Vries, Lianne (TU Delft Mechanical, Maritime and Materials Engineering)","Chimmalgi, S. (mentor); Wahls, S. (mentor); Batselier, K. (graduation committee); Delft University of Technology (degree granting institution)","2021","Optical fibers form the backbone of our global data transmission infrastructure. As demands on global data transmission grow the capacity of these systems needs to be increased. The behaviour of light waves through these optical fibers is described by the Manakov Equation (ME), a system of nonlinear partial differential equations.
The ME is an integrable system, which can be solved analytically using Nonlinear Fourier Transforms. Recently, fiber-optic communication systems based on the Nonlinear Fourier Transform (NFT) of the ME have been proposed. Similar to the linear Fourier Transform, which decomposes a signal into linear frequency components, the NFT decomposes a signal into nonlinear frequency components. This nonlinear spectrum consists of a continuous and a discrete part. The continuous spectrum is in general supported on the whole real line. The discrete spectrum consists of distinct points in the complex plane which correspond to so-called solitons, stable wave forms. The evolution of the nonlinear spectrum along the fiber is trivial.
The nonlinear spectrum, however, cannot be computed analytically for most signals, and therefore numerical methods are needed. The existing numerical methods have a high computational complexity of O(D^2) for computing the continuous spectrum, with D the number of time samples of the signal. For the Nonlinear Schrödinger Equation (NSE), a simplification of the ME, more efficient numerical methods exist with a computational complexity of O(D log^2(D)). In this thesis we present an extension of these so-called fast NFT methods to the ME. The resulting algorithms are second- and fourth-order algorithms based on second- and fourth-order exponential integration methods, respectively.
We developed open source software implementing the fast NFT algorithms for the ME and integrated them in the already existing Fast Nonlinear Fourier Transform (FNFT) software library. We provide detailed documentation and examples which allow other researchers to use the algorithms as tools or as a base for developing new algorithms. We furthermore test the accuracy of the developed algorithms against analytic examples. Of these examples, the rectangle signal and secant hyperbolic signal are new analytic examples for the ME to the best of our knowledge.","nonlinear Fourier transform; Manakov equation; Software Library; fiber-optic communication","en","master thesis","","","","","","","","","","","","Mechanical Engineering | Systems and Control","",""
"uuid:383a6ff6-374b-4fdd-80a3-da3112d0ba05","http://resolver.tudelft.nl/uuid:383a6ff6-374b-4fdd-80a3-da3112d0ba05","Leveraging Design Thinking to Support Internal Agile Software Development: An Opportunity for Nike Technology","Hoogendijk, Celine (TU Delft Industrial Design Engineering)","Nas, D.N. (mentor); Garcia Mateo, J. (graduation committee); Delft University of Technology (degree granting institution)","2021","As agile practices lack a focus on understanding the actual problem, and Design Thinking is assumed to be a promising approach to complement agile practices regarding this lack, this graduation project aims to identify opportunity areas to leverage the Design Thinking methodology in the process of agile software development. The context of focus was a specific technology unit within Nike, Inc.
The main research question is formulated as follows:
‘How might we use Design Thinking to our advantage in the agile software development context of the targeted Nike Technology unit?’
Recognizing that Design Thinking is a contextual concept that needs further adaptation to contextual user needs, a literature review and context analysis are conducted on Design Thinking, agile software development, and the related opportunities and boundaries.
Research findings following the interviews indicate three main areas of concern that form boundaries to problem exploration: having a solution-oriented rather than a problem-oriented mindset, organizational structures that limit the space for problem exploration in terms of time, processes, and the role of technology in the problem exploration phase, and the need and importance of having a clear and aligned vision.
Literature and exploratory research findings are integrated, answering the research question through a conceptual model covering three key principles: problem-oriented and human-centered thinking, dynamic alignment towards strategic fit, and divergent thinking to consider more fit-for-purpose alternatives.
Subsequently, the conceptual model is translated into a usable artifact: a Problem Deep Dive Canvas accompanied by a Problem Deep Dive Tool Guide. The product aims to support product managers and product owners to put the key principles of the conceptual model into practice in collaboration with agile software development teams and business stakeholders.
The threshold to use the product is low, as there are no significant conflicts with current structures and processes. Initial validation results are promising with regard to the feasibility, desirability, and viability of the product. Using the canvas on actual requests already showed that its outcomes can significantly impact the further trajectory of the intended projects.","Design Thinking; Agile Software Development; Problem Exploration","en","master thesis","","","","","","","","","","","","Strategic Product Design","",""
"uuid:f2e7c53b-f088-40e9-907a-a06b02c112fa","http://resolver.tudelft.nl/uuid:f2e7c53b-f088-40e9-907a-a06b02c112fa","Modelling Second Order Effects of Changes in Civil Engineering Projects","Hassan, Yassmin (TU Delft Civil Engineering & Geosciences); Peco, Igor (TU Delft Civil Engineering & Geosciences)","Bakker, H.L.M. (mentor); Bosch-Rekveldt, M.G.C. (graduation committee); van Daalen, C. (graduation committee); van Dijkhuizen, M.J. (graduation committee); Flamink, Cathelijne (graduation committee); Selhorst, Menno (graduation committee); Delft University of Technology (degree granting institution)","2021","One of the most common hurdles especially in large infrastructure projects is scope change. Scope changes negatively affect time, quality and usually lead to cost overruns due to underestimating the change order impacts on the project. There are two main categories through which the impacts could be quantified: the first and second order effects. The first order effect is the impact of the change, cost, time, quality, and risk related. The second order effects are the impacts that resulted as consequences of the first order effects such as lower labour productivity and increase in errors.
The purpose of this research was to analyse and quantify the second order effect due to scope change in large infrastructure projects. The approach that was used to create the model is system dynamics modelling (SDM), since several studies confirmed the success of system dynamics modelling in solving similar projects’ problems in different industries.
Through the literature review and the case study, it was found that the second order effects of scope change are rework, schedule pressure, morale, overtime, productivity, hiring new staff, and office congestion. It was observed through the created dynamic hypothesis that productivity was directly impacted by morale, office congestion, schedule pressure and overtime. The simulated model showed that productivity and morale were the two factors most influenced by the scope change.
The main question of ""How could second order effects of the project scope change be quantified through system dynamics modelling?"" was answered as follows: First, a dynamic hypothesis should be created and confirmed through literature and continuous interviews until the hypothesis reflects the project case. Second, formulas should be created for the defined variables and values should be inserted in the model based on the case data. Third, the model should be simulated, and the perceived progress should be compared with the planned progress. Then, the first order progress should be compared with the planned progress to quantify the first order effects of scope change. Finally, to quantify the influence of the second order effects of scope change, the first order effects should be deducted from the total influence of scope change.","System Dynamic Modeling; Second Order Effects; Scope change; Vensim Software; Risk Management; Design Uncertainty; Optimism Bias","en","master thesis","","","","","","","","2023-09-28","","","","Civil Engineering | Construction Management and Engineering","Afsluitdijk",""
"uuid:5faf395e-3df9-41aa-875a-8a74fa0e741a","http://resolver.tudelft.nl/uuid:5faf395e-3df9-41aa-875a-8a74fa0e741a","Enabling domain experts to participate in the process of improving software quality using change impact analysis","Nederveen, Tim (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering)","Proksch, S. (mentor); van de Kamp, V. (mentor); Zaidman, A.E. (graduation committee); Höllt, T. (graduation committee); Delft University of Technology (degree granting institution)","2021","Software engineers often lack the domain knowledge needed to validate context specific parts of software. Domain experts do have this knowledge needed to validate the software, but often lack the expertise and tools to apply this knowledge in a way that tests the software product. Based on a case study at business-software company Exact, this study proposes a method of change impact analysis to help domain experts comprehend the structure of the system and allow them to take part in the code review process by assessing whether the impact of a change is as expected. Evaluation of a developed proof of concept at Exact using common-scenarios and a user evaluation shows that the method is effective in providing insights about the impact of changes to domain experts which provides a good intuition that using change impact analysis can aid domain experts to be involved in the process of improving software quality.","change impact analysis; Software testing; domain experts; software quality","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:fefea2c8-2ea6-4a79-8a80-e737e13d54be","http://resolver.tudelft.nl/uuid:fefea2c8-2ea6-4a79-8a80-e737e13d54be","Facilitating Tap-To-Phone adoption: Towards a portable architecture decision flow for designing a server-based payment architecture","Vissers, Vince (TU Delft Technology, Policy and Management)","Ding, Aaron Yi (mentor); Correljé, A. (graduation committee); van Bergem, R. (mentor); Delft University of Technology (degree granting institution)","2021","","mPOS; MobilePOS; SoftPOS; Software architecture; Payments","en","master thesis","","","","","","","","","","","","Complex Systems Engineering and Management (CoSEM)","",""
"uuid:e2c82316-8f35-4e19-b5ab-29e2dbf93a30","http://resolver.tudelft.nl/uuid:e2c82316-8f35-4e19-b5ab-29e2dbf93a30","FPGA Based Deep Learning Accelerator for RF Applications: A Design Framework","den Boer, Hans (TU Delft Electrical Engineering, Mathematics and Computer Science)","Wong, J.S.S.M. (mentor); Voogt, V (mentor); Delft University of Technology (degree granting institution)","2021","Recently, interest in the use of deep learning technology for RF applications has increased. However, many of these studies are focused on developing deep learning models for a particular RF application. Therefore this master thesis focuses on the implementation of these kinds of deep learning models by using FPGAs such that these deep learning models can be used in an FPGA-based Software Defined Radio.
In this master thesis, a custom FPGA accelerator is designed for CNN models using reusable and configurable building blocks. The accelerator employs a streaming architecture and is fully pipelined, such that it accepts new input data every clock cycle. A key design aspect is that all building blocks in the accelerator are designed to be able to work on a portion of their input data. The implication is that the building blocks can produce an output as soon as enough input data is available. As a result, the work that the building blocks have to perform is spread out over time, and the memory required for storing data is also reduced. Moreover, the precision of the fixed-point parameters and operations is configurable. Therefore, the accelerator is not limited to supporting only binary or ternary operations.
This accelerator has been tested for the automatic modulation classification problem. The result is an accelerator that can process real-time data at 600MHz and consume fewer FPGA resources than other similar initiatives. In a direct comparison with hls4ml, the designed custom accelerator achieves 2.4 times higher throughput and 2.3 times lower latency for the identical CNN, while also achieving the same accuracy and significantly lower resource utilization. In addition, the custom accelerator is compared to a ternary neural
network FPGA accelerator for modulation classification as proposed by Tridgell et al. The custom accelerator uses 3.3 times fewer LUTs, 9 times fewer FFs, 4 times fewer DSPs, and uses no BRAM, while the accelerator proposed by Tridgell et al. uses 48.5% of the available BRAM in an RFSoC FPGA.","FPGA; Deep Learning; Cognitive Radio; Software Defined Radio; Artificial neural networks; FPGA acceleration","en","master thesis","","","","","","","","","","","","","",""
"uuid:c0cb185d-16ae-4829-ac5c-f81b41d6c7aa","http://resolver.tudelft.nl/uuid:c0cb185d-16ae-4829-ac5c-f81b41d6c7aa","Enabling Log Recommendation Through Machine Learning on Source Code","Mikalauskas, Liudas (TU Delft Electrical Engineering, Mathematics and Computer Science)","Barros Cândido, J. (mentor); Aniche, Maurício (mentor); Katsifodimos, A (graduation committee); Delft University of Technology (degree granting institution)","2021","Logging is a common practice in software development that assists developers with the maintenance of software. Logging a system optimally is a challenging task, thus Li et al. have proposed a state-of-the-art log recommendation model. However, no further attempts exist to improve the model or reproduce their results using different training data. In this research, a model was developed using the methods of Li et al. to evaluate its performance when trained on a specific dataset. Some aspects of the model such as feature filtering were studied. It was concluded that the methods of Li et al. are reproducible and can produce a model that performs well with various training data. The study on feature filtering revealed that not filtering features results in an increase of all tested metrics.","Deep learning; Software Engineering; debugging; Software Maintenance; Artificial intelligence; code blocks; logging; logging locations; Neural Network; logging suggestions","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:6260695a-a963-4414-9967-5cb66faf1ba8","http://resolver.tudelft.nl/uuid:6260695a-a963-4414-9967-5cb66faf1ba8","A Static-Based Approach to Detect SQL Semantic Bugs","Ion, Claudiu (TU Delft Electrical Engineering, Mathematics and Computer Science)","Aniche, Maurício (mentor); van Deursen, A. (graduation committee); Lofi, C. (graduation committee); Delft University of Technology (degree granting institution)","2021","While SQL engines are now capable of detecting a large number of syntactic mistakes, most often semantic errors are not detected, which can lead to serious performance issues or even security vulnerabilities being introduced in the system. This thesis proposes a set of 25 validated heuristics together with a new rule-based static analysis tool for detecting the most common types of semantic bugs in SQL queries, based on evidence from previous research. We conduct an empirical study on the prevalence of semantic bugs in SQL on two datasets with queries collected from different open-source industry projects as well as on a large dataset of queries collected from StackOverflow posts. Manual analysis of more than 500 queries shows that our tool is able to detect semantic bugs in SQL queries with an accuracy of 97%. Furthermore, out of all 191,994 collected queries, we identified a total of 36,818 queries which contain at least one semantic bug, meaning that 19.17% of queries contained some semantic problem in their formulation. To the best of our knowledge, this is the largest dataset of SQL queries extracted from StackOverflow and could later be used for subsequent studies as well.","SQL; Static analysis; Semantic bugs; SQL queries; Detection; Software Engineering","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:6c93cf00-2be1-4e60-ab82-e2fc456d657f","http://resolver.tudelft.nl/uuid:6c93cf00-2be1-4e60-ab82-e2fc456d657f","Guiding Big Data Fuzz Testing with Boosted Coverage-Based Input Selection","van den Berg, Bo (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology)","Özkan, B. (mentor); Decouchant, Jérémie (graduation committee); Delft University of Technology (degree granting institution)","2021","Big data applications are becoming increasingly popular. The importance of testing these applications increases with it. A recently proposed work called BigFuzz applies automated testing. The big data fuzzing tool shows very promising results. The aim of this research is to inspect how coverage guidance affects the performance of big data fuzzing. The current coverage usage is first described, then an extension is proposed, which is compared to the original. This work extends the BigFuzz tool with branch coverage guidance. The existing black-box fuzzer is substituted for a grey-box fuzzer, which is then extended to a boosted grey-box fuzzer. The two extensions both allow branch discovery. Boosted grey-box fuzzing shows to be the most efficient branch exploration mechanic. Furthermore, both extensions outperform the original tool regarding error detection.","Fuzz testing; Software testing; Test generation; Branch coverage; Big Data Analysis; DISC systems","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:6c76c94e-2b89-4712-9125-ccc100f764b5","http://resolver.tudelft.nl/uuid:6c76c94e-2b89-4712-9125-ccc100f764b5","Recommending Log Placement Based on Code Vocabulary","Lyrakis, Kostas (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering; TU Delft Software Technology)","Barros Cândido, J. (mentor); Aniche, Maurício (mentor); Katsifodimos, A (graduation committee); Delft University of Technology (degree granting institution)","2021","Logging is a common practice of vital importance that enables developers to collect runtime information from a system. This information is then used to monitor a system's performance as it runs in production and to detect the cause of system failures. Despite its importance, logging is still a manual and difficult process. Developers rely on their experience and domain expertise in order to decide where to put log statements. In this paper, we tried to automatically suggest log placement by treating code as plain text that is derived from a vocabulary. Intuitively, we believe that the Code Vocabulary can indicate whether a code snippet should be logged or not. In order to validate this hypothesis, we trained machine learning models based solely on the Code Vocabulary to suggest log placement at method level. We also studied which words of the Code Vocabulary are more important when it comes to deciding where to put log statements. We evaluated our experiments on three open source systems and we found that i) The Code Vocabulary is a great source of training data when it comes to suggesting log placement at method level, ii) Classifiers trained solely on Vocabulary data are hard to interpret as there are no words in the Code Vocabulary significantly more valuable than others.
First, an introduction to the problem is given together with a description of the tiny house community. After that, the general program of requirements is presented, as well as the requirements of this subgroup. Next, an artificial neural network design is presented, which is used to forecast solar and wind generation and energy demand. The designed dense neural network resulted in predictions with mean errors of 10.11%, 12.56%, and 6.95% as a fraction of the maximum value for solar generation, wind generation, and energy demand, respectively. The predictions functioned as an input for the model predictive controller, which used them to place restrictions on appliances in the community when necessary, to reduce dependency on the main power grid of Rotterdam. Using a mathematical optimization algorithm, a simulation of one year showed that the controller could reduce the grid dependency by up to 25%, compared to simulating without the controller. The conclusion summarises the achieved results, discusses whether the requirements are met, and considers possible future work.","control; software; artificial neural networks; model predictive control; microgrids; tiny house; community","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:f8375d5f-3bbd-4559-863b-6951e9d6bab0","http://resolver.tudelft.nl/uuid:f8375d5f-3bbd-4559-863b-6951e9d6bab0","TestAxis: Save Time Fixing Broken CI Builds Without Leaving Your IDE","Boone, Casper (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, A.E. (mentor); Katsifodimos, A (graduation committee); Brandt, C.E. (graduation committee); Delft University of Technology (degree granting institution)","2021","The most common reason for Continuous Integration (CI) build failures is failing tests. When a build fails, a developer often has to scroll through hundreds to thousands of log lines to find which test is failing and why. Finding the issue is a tedious process that relies on a developer's experience and increases the cost of software testing. Providing CI build test results with additional context in the developer's local development environment could help solve failing tests more quickly. We propose TestAxis, a test result inspection tool that brings CI test results to the Integrated Development Environment (IDE), offering an experience similar to running a test locally. Moreover, it surfaces additional information that is too expensive to collect in local development, for example, a unique view of the code under test that was changed leading up to the build failure. We implement TestAxis as a plugin for IntelliJ and conduct a user study to evaluate its usefulness and performance benefits. The participants solve programming assignments evaluating the three main features: the test results overview, the test code editor, and the changed code under test display. We show that TestAxis helps developers fix failing tests 13.4% to 30.4% faster. The participants found the features of TestAxis useful and would incorporate it in their development workflow to save time.
With TestAxis we set an important step towards removing the need to manually inspect build logs and bringing CI build results to the IDE, ultimately saving developers time.","Continuous Integration; Software Testing; IDE; Software Engineering","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:1345c741-116e-493c-813b-7cb079d5fa84","http://resolver.tudelft.nl/uuid:1345c741-116e-493c-813b-7cb079d5fa84","Development of a Combinator Curve Generator","Sharoubim, Sarah (TU Delft Mechanical, Maritime and Materials Engineering)","Tillack, Thorsten (mentor); Mertes, Paul (mentor); de Vos, P. (mentor); Delft University of Technology (degree granting institution)","2021","Often, vessels have multiple operation modes that are specialised for a certain task. If a vessel is equipped with a controllable pitch propeller (CPP), the blade pitch can be adjusted, adding a degree of freedom to the system. This advantage creates the possibility to increase the diversity of operation modes of the vessel and allows for flexibility, precision and specialisation for a certain task. Additionally, CPPs can be controlled such that operational limits of the driving machinery are not exceeded.
As vessels have become increasingly diverse in their functional abilities and complex in their propulsion configurations, the design of combinator curves has become increasingly labour intensive. Earlier software applications that aim to support the matching process or combinator settings lack clear insight into important performance indicators and their impact on combinator curve design.
In this thesis, a Combinator Curve Generator (CCG) is developed to support the design of combinator curves for vessels that employ CPPs, in order to decrease the labour intensity of the combinator design process. Further, approaches are developed to optimise combinator curves for operation modes of a vessel in terms of four performance indicators: propeller efficiency, cavitation inception, engine efficiency, and fuel consumption. The approaches are implemented in the CCG such that a combinator curve can be designed, optimised and evaluated. Additionally, a trip simulation tool is developed and added to the CCG, in order to determine and evaluate the total fuel consumption of a trip for different cruise speeds and a certain time duration, whilst taking into account the distance, the operational profile of the vessel, the combinator settings and the hotel load.
Important recommendations for further development include extension of the database of inception diagrams for propellers with different blade area ratios, and the broadening of the propulsion configuration scope, such that different main engines and power supply systems can be considered. Finally, it is recommended to research the possibility to calibrate the effective angle of attack method on the basis of the optimisation approach proposed in this thesis.
experts through interviews. Following from these interviews, three novel challenges were identified. The first challenge is that the impact should be measured to make a more accurate prediction. The second challenge is combining multiple information sources, and the third is the explainability of the decision. Furthermore, two solutions to existing challenges were investigated during the creation of the machine learning system. The first is the suitability of different machine learning models for incident data, as no direct comparison is available in the literature. It is shown that Logistic Regression is best suited for this use case, while the Support Vector Machine and Neural Network also perform well on incident data. Finally, some findings on the pre-processing of the incident data are reported. It is shown that the assumption in the literature that automatically generated incident data is easier to use cannot always be made, and that imbalanced data remains an unsolved problem, as sampling is not suited. The main contributions of this thesis are the insights and challenges in the unexplored topic of major incident detection and general recommendations for handling incident data.","incident management; software analytics; machine learning; major incidents; it service management","en","master thesis","","","","","","","","2021-04-01","","","","Computer Science","",""
"uuid:abfa9cc8-75ba-4dd0-84ed-3ce674445c0d","http://resolver.tudelft.nl/uuid:abfa9cc8-75ba-4dd0-84ed-3ce674445c0d","Predicting True Vulnerabilities from Static Analyzer Warnings in Industry: An Attempt to Faster Releasing Software in Industry","Bisesser, Dinesh (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Cyber Security)","Panichella, A. (mentor); Verwer, S.E. (graduation committee); Lagendijk, R.L. (graduation committee); Delft University of Technology (degree granting institution)","2020","An increasingly digital world comes with many benefits but unfortunately also many drawbacks. The growth of the digital world means an increase in data and software. Developing more software unfortunately also means a higher probability of vulnerabilities, which can be exploited by adversaries. Adversaries take advantage of users and software vulnerabilities by stealing data to cause harm, stealing money, and more. This makes the digital world a dangerous environment.
To ensure software has a minimal number of vulnerabilities, companies invest in software tools and experts to check their software for vulnerabilities. One such company is ING, the largest bank of The Netherlands. At ING they use Fortify, a static analyzer. The problem with this tool is that it gives many false positives. Therefore, pentesters and developers have to manually check all the warnings given by Fortify, which takes a lot of time and slows down the whole software development process. In this study, we propose to use supervised machine learning techniques to predict true vulnerabilities from static analyzer warnings. Using ING's data from Fortify, two highly imbalanced datasets with code metrics are created on class and method level. Various classifiers and sampling techniques are compared to determine which techniques perform the best. In addition, we compared the performance at different levels of granularity. Finally, we also investigate whether a dataset with different types of vulnerabilities performs better than a dataset consisting of only one vulnerability type. From our study, it is clear that Bagging in combination with ClassBalancer gives the best f-measure (0.618) for the class-level dataset, which we consider moderately good. Random Forest with SMOTE gives the best f-measure (0.412) for the method-level dataset, which we consider weak. Depending on the type of vulnerability, the performance can benefit from a dataset per vulnerability type. Overall, the performance found in this study shows slightly promising results when using Fortify in combination with supervised machine learning, especially compared to only using Fortify.","Static Analysis; Supervised Learning; Fortify; software vulnerability detection; Classification; Code Metrics; Vulnerability Types; Granularity; Closed Source","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:83adfeb9-bbac-488b-a96a-410662b11885","http://resolver.tudelft.nl/uuid:83adfeb9-bbac-488b-a96a-410662b11885","QuTAF: A Test Automation Framework for Quantum Applications","Betanzo Sanchez, Fernando (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Quantum & Computer Engineering)","Wehner, S.D.C. (mentor); Elkouss Coronas, D. (graduation committee); Pawelczak, P. (graduation committee); Kozlowski, W. (graduation committee); Delft University of Technology (degree granting institution)","2020","The testing of quantum applications can be approached from three perspectives. The first one concerns the certification of the accuracy of the quantum device where the application is run. The second one has to do with the classical verification of the result output by the application. Yet a third one addresses the problem from the software engineering perspective. As new quantum applications that run on Noisy Intermediate-Scale Quantum devices are developed, there is an increasing need for tools that can help to find bugs and verify that these applications work as expected. In this thesis we design and develop such a tool. We introduce QuTAF, a test automation framework for quantum applications that is based on the Robot Framework. To the best of our knowledge, this is the first test automation framework developed and used for testing quantum applications in a real quantum node. We show that QuTAF is capable of detecting minor deficiencies in current state-of-the-art quantum hardware, by running tests for small quantum applications executed in a networked quantum node. We also simulate and test two different failure scenarios to validate the capabilities of QuTAF. We simulate quantum devices affected by depolarizing and dephasing noise, and find that QuTAF is able to detect errors introduced by an increase in the depolarization probability, but is otherwise insensitive to the errors produced by the dephasing of quantum states. We also simulate bugs present in the quantum programs, and show that QuTAF is able to correctly identify these as failing test cases.","Quantum Computing; Test Automation; Software Testing; Quantum Software Testing; Quantum Applications","en","master thesis","","","","","","","","","","","","","",""
"uuid:b837d6b4-896e-41c0-91e6-b9737249c545","http://resolver.tudelft.nl/uuid:b837d6b4-896e-41c0-91e6-b9737249c545","BTI in SRAM: Mitigation for BTI ageing in SRAM memories","Hamburger, Rens (TU Delft Electrical Engineering, Mathematics and Computer Science)","Hamdioui, S. (mentor); Delft University of Technology (degree granting institution)","2020","The aggressive downscaling of the transistor has led to gigantic improvements in the performance and functionality of electronics. As a result, electronics have become a significant part of our daily lives whose absence would be difficult to imagine. Our cars, for example, now contain many sensors and small computers, each controlling certain parts of the car. A downside of the aggressive downscaling of transistor sizes is that it negatively impacts the reliability of electronics, accelerates their ageing, and thus reduces their lifetime. To ensure the reliable operation of electronics, it has therefore become essential to assess the reliability of their embedded components accurately. Conventionally, to combat ageing, designers use guardbanded design: adding design margins. These margins, however, lead to a penalty in area, power, and speed. Alternatively, one may investigate mitigation schemes that aim at reducing the impact of ageing to extend the reliability and lifetime. These mitigation schemes may lead to a higher performance compared with the conventional guardbanded design. This work focuses on an ageing mitigation scheme for SRAMs. SRAMs typically have the highest contribution to the total area of integrated circuits. Therefore, they are highly optimised (i.e. their integration density is the highest). This also makes them one of the components most susceptible to ageing. Hence, providing appropriate ageing mitigation schemes for SRAMs is essential for the overall reliability of ICs.
Whereas prior work has mainly investigated hardware-based ageing mitigation schemes for SRAMs, this thesis investigates the possibility of mitigating the ageing through software. The advantages of this approach are that it does not require circuit changes (and is thus applicable to existing circuits) and that it comes at zero area overhead. This study’s proposed software-based scheme is based on periodically running a mitigation routine. This routine flips the contents of the memory cells to relax the transistors from BTI stress, the most crucial ageing mechanism in deeply scaled CMOS processes. The results show that the software-based scheme can significantly reduce the ageing of the memory at a low overhead. For example, the degradation of the hold SNM metric of the memory cell is reduced by up to 40% at a runtime overhead of only 1.4%. Moreover, the scheme also mitigates the ageing of other components of the memory. For example, the degradation of the offset voltage of the sense amplifier is reduced by nearly 50%. This thesis shows that it is possible to use software to mitigate ageing effects in memory components and that it is worthwhile to consider implementing it.","bti; sram; hardware mitigation; Software mitigation; ageing","en","master thesis","","","","","","","","","","","","","",""
"uuid:bf649e9c-9d53-4e8c-a91b-f0a6b6aab733","http://resolver.tudelft.nl/uuid:bf649e9c-9d53-4e8c-a91b-f0a6b6aab733","Machine Learning for Software Refactoring: a Large-Scale Empirical Study","Gerling, Jan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Finavaro Aniche, M. (mentor); van Deursen, A. (graduation committee); Erkin, Z. (graduation committee); Delft University of Technology (degree granting institution)","2020","Refactorings tackle the challenge of architectural degradation of object-oriented software projects by improving their internal structure without changing the behavior. Refactorings improve software quality and maintainability if applied correctly. However, identifying refactoring opportunities is a challenging problem for developers and researchers alike. In recent work, machine learning algorithms have shown great potential to solve this problem. This thesis used RefactoringMiner to detect refactorings in open-source Java projects and computed code metrics by static analysis. We defined the refactoring opportunity detection problem as a binary classification problem and deployed machine learning algorithms to solve it. The models classify between a specific refactoring type and a stable class using the metrics as features. Multiple machine learning experiments were designed based on the results of an empirical study of the refactorings. For this work, we created the largest data set of refactorings in Java source code to date, including 92,800 open-source projects from GitHub with a total of 33.67 million refactoring samples. The data analysis revealed that Class- and Package-Level refactorings occur most frequently in the early development stages of a class, while Method- and Variable-Level refactorings are applied uniformly during the development of a class.
The machine learning models achieve high performance, ranging from 80% to 89% total average accuracy for different configurations of the refactoring opportunity prediction problem on unseen projects. Selecting a high Stable Commit Threshold (K) improves the recall of the models significantly, but also strongly reduces their generalizability. The Random Forest (RF) classifier shows great potential for refactoring opportunity detection: it can adapt to various configurations of the problem, identifies a large variety of relevant metrics in the data, and is able to distinguish different refactoring types. This work shows that solving the refactoring opportunity detection problem requires a large variety of metrics, as a small set of metrics cannot represent the complexity of the problem.","Refactoring; software engineering; machine learning; data set; open source; Java","en","master thesis","","","","","","http://doi.org/10.5281/zenodo.4267824 (Appendix: Data Analysis and Machine Learning Experiments); http://doi.org/10.5281/zenodo.4267711 (Appendix: Refactoring Data Set); https://github.com/refactoring-ai/Data-Collection (Refactoring Mining Tool); https://github.com/refactoring-ai/Machine-Learning (Machine Learning Pipeline)","","","","","","Computer Science","",""
"uuid:d01caad2-e537-4a1d-b0ca-c83db77cf1fe","http://resolver.tudelft.nl/uuid:d01caad2-e537-4a1d-b0ca-c83db77cf1fe","A distributed and scalable real-time log analysis","Proost, Rick (TU Delft Electrical Engineering, Mathematics and Computer Science)","Finavaro Aniche, M. (mentor); van Deursen, A. (graduation committee); Katsifodimos, A. (graduation committee); Delft University of Technology (degree granting institution)","2020","Software behaviour is monitored in various ways. Log messages are output by almost any kind of running software system, so analysing log data can lead to new insights about how a system behaves. However, the number of log messages in a computer system grows fast, and analysing the log data by hand is a time-consuming job. The objective of this study is to propose and implement a scalable architecture for real-time log analysis. Log data is structured so that analysis can take place, and the solution is horizontally scalable in every module so that the approach can scale with an ever-growing software solution. The focus of the study is on the scalability and ease of use of the implementation of the proposed approach. The proposed solution can scale horizontally, and the test setup showed that reporting features for anomalies remained instantaneous when processing 1.2 million log lines per minute. The usability of the proposed approach is tested in a case study at Weave, where bugs were found by running the proposed solution in a controlled environment.","Scalable Log Data Analysis; Distributed Systems; Real-time Log Data Analysis; Software Monitoring","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:bc01b393-909c-4b57-a973-820fa4535e03","http://resolver.tudelft.nl/uuid:bc01b393-909c-4b57-a973-820fa4535e03","Software Architecture for a Self-Organizing Logistics Planning System: A continuation study on the SOLiD project","Valdivia, Diego (TU Delft Technology, Policy and Management)","van Duin, Ron (mentor); Ubacht, J. (graduation committee); Tavasszy, Lorant (graduation committee); van Dijk, Bernd (mentor); Delft University of Technology (degree granting institution)","2020","The rise of e-commerce has led to a congested last-mile delivery paradigm. Increasing customer expectations have pushed carriers into a delivery market with diminishing profitability. Furthermore, the current state of last-mile delivery has high societal costs in congestion and environmental impact. To address these challenges, scientists in the logistics field have proposed multimodal transport and collaborative delivery. However, current centralized logistics planning systems are unable to cope with the complexity that these solutions pose. For this reason, Thymo Vlot developed a Self-Organizing Logistics algorithm that leverages decentralization to enable multimodal transport and collaborative delivery. However, the logistics planning system that would utilize this algorithm was left undeveloped, which motivates this thesis project. The first step to develop a software project is to define its architecture. In this thesis project, I analyze Thymo Vlot’s algorithm and develop the software architecture of a logistics planning system that would use it. The main design tool consisted of the selection and application of architectural patterns, which are documented solutions to commonplace problems in software development, drawn from the literature and modern distributed systems. The result of this project is an event-driven microservices architecture, which is highly granular, modifiable, and scalable. 
These characteristics give the project significant commercial value, as the architecture can be applied to different use cases with different algorithms. The project also leaves behind an architecture with small components, which eases the development cycle of the logistics planning system, thus addressing important managerial challenges. Finally, this project makes significant scientific contributions by developing a software solution that addresses managerial, societal, and environmental challenges of last-mile delivery, thus providing a stepping stone for further research that bridges the gap between the logistics and computer science fields.","self-organized systems; smart parcels; software architecture; Distributed Systems; Logistics; Parcel Delivery; Last mile delivery; Netherlands","en","master thesis","","","","","","","","2022-12-31","","","","Management of Technology (MoT)","",""
"uuid:672634ec-53ee-4ebd-a1f0-c958c646a261","http://resolver.tudelft.nl/uuid:672634ec-53ee-4ebd-a1f0-c958c646a261","Automatically Identifying Parameter Constraints for Complex Web APIs: A Case Study at Adyen","Grent, Henk (TU Delft Electrical Engineering, Mathematics and Computer Science)","Aniche, Maurício (mentor); van Deursen, A. (graduation committee); Poulsen, C.B. (graduation committee); Akimov, A. (graduation committee); Delft University of Technology (degree granting institution)","2020","Web APIs can have constraints on parameters, such that not all parameters are either always required or always optional. Sometimes the presence or value of one parameter could cause another parameter to be required. Additionally, parameters could have restrictions on what kinds of values are valid. We refer to these as inter-parameter and single-parameter constraints respectively. Having a clear overview of the constraints can help API consumers to integrate without the need for additional support and with fewer integration faults.
We developed two approaches for identifying parameter constraints in complex web APIs. One approach uses online documentation to infer inter-parameter constraints; the other depends on static code analysis to extract inter- and single-parameter constraints from the control flow of the API’s source code. In our case study on several APIs at Adyen, the documentation- and code-based approaches identify 21% and 53% of the constraints, respectively. When the constraints identified by both approaches are combined, 66% of the inter-parameter constraints can be identified. Code analysis is able to identify 78% of the single-parameter constraints.","Software Engineering; Web API; Parameter Constraints; Parameter Dependencies; API Specifications","en","master thesis","","","","","","","","","","","","Computer Science and Engineering","",""
"uuid:c7dc661e-c3f9-4986-bc54-c903aaddbc68","http://resolver.tudelft.nl/uuid:c7dc661e-c3f9-4986-bc54-c903aaddbc68","Towards Engineering AI Software for Fairness: A framework to help design fair, accountable and transparent algorithmic decision-making systems","Lazo, Claudio (TU Delft Electrical Engineering, Mathematics and Computer Science)","Houben, G.J.P.M. (graduation committee); Lofi, C. (mentor); Venkatesha Prasad, R.R. (graduation committee); Delft University of Technology (degree granting institution)","2020","Algorithmic decision-making (ADM) is becoming increasingly prevalent in society, due to the rapid technological developments in Artificial Intelligence. ADM systems make substantially impactful decisions about people: they diagnose whether we have a disease, determine what news and which ads we see, decide whether we are eligible for a job, benefits, a college, or a loan, show us personalized media and news, and steer the car that drives us home. However, ADM brings about ethical, legal, and social issues by inheriting and perpetuating human biases, learning to discriminate—even learning gender or racial stereotypes, and lacking transparency and accountability. This unexpected and biased behaviour arises because these software systems are usually built without the specification of fairness requirements (i.e. what fair behaviour is expected of the system). We envision a Software Engineering for Values (SEfV) method that solves this problem.
This study addresses that specification problem, aiming to help practitioners design ADM software for fairness. Using literature in social sciences—specifically organizational justice—the human value of fairness has been conceptualized in regard to ADM. This resulted in a fairness tree with four dimensions (procedural, distributive, informational and interpersonal fairness), which is further specified into 31 fairness norms. Subsequently, the fairness tree is related to current measures of fairness and techniques. Finally, we put forward the Software Engineering for Values (SEfV) framework, based on the principles of Software Engineering and Design for Values, and show how it can be applied to design ADM for fairness.
Experiments were conducted in which participants (N = 12) performed a design task (M = 3.75 requirements specified) and an audit task for a hypothetical loan decision system, using a prototype of the SEfV framework. Participants found the prototype useful for both design and auditing, especially as a tool for reflecting on fairness considerations. This suggests that a high-fidelity version would be useful for practitioners.","fairness; discrimination; bias; algorithmic decision-making; machine learning; software engineering; requirements engineering; Design for values; AI ethics","en","master thesis","","","","","","","","","","","","Computer Science | Web Information Systems","",""
"uuid:80c1b078-b8ca-4c29-b0ba-866fdc5f656b","http://resolver.tudelft.nl/uuid:80c1b078-b8ca-4c29-b0ba-866fdc5f656b","Predicting software vulnerabilities with unsupervised learning techniques","Man, K.W. (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Cyber Security)","Verwer, S.E. (mentor); Panichella, A. (mentor); Lagendijk, R.L. (graduation committee); Delft University of Technology (degree granting institution)","2020","As more software is produced every year, software is also exploited more. This exploitation can lead to huge monetary losses and other damage to companies and users. Exploitation can be reduced by automatically detecting the software vulnerabilities that lead to it. Unfortunately, the state-of-the-art methods for this automated process are not perfect, and thus more research is needed to address this issue.
This research was partly conducted at ING, a bank in The Netherlands, in order to find a software vulnerability prediction method that is more efficient than their already deployed static code analysis tool, Fortify Static Code Analyzer. This report proposes a method to predict software vulnerabilities in code using unsupervised learning methods. The data set comprises software metrics of code written by developers at ING, together with a corresponding label indicating whether the code was vulnerable or non-vulnerable, confirmed by a security expert. Principal component analysis reduced the dimensions of the data set. From there, the unsupervised learning technique k-means was used to build our prediction model, and a distance-based anomaly detection technique was applied to find the software vulnerabilities. This produced poor results. In a final attempt to find better results, k-nearest neighbor was used to build a new prediction model and another distance-based anomaly detection technique was applied. The outcome of this latter method was surprisingly good.","k-means; unsupervised learning; software fault prediction; software vulnerability detection; k-nearest neighbors; Fortify; anomaly detection; clustering","en","master thesis","","","","","","","","","","","","","",""
"uuid:399edf65-69c6-4d20-a3d7-9658ccb9bc17","http://resolver.tudelft.nl/uuid:399edf65-69c6-4d20-a3d7-9658ccb9bc17","Language-agnostic Incremental Code Clone Detection","Gamvrinos, S. (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Deursen, A. (mentor); Finavaro Aniche, M. (graduation committee); Poulsen, C.B. (graduation committee); di Biase, M. (mentor); Delft University of Technology (degree granting institution)","2020","Code duplication is a form of technical debt frequently observed in software systems. Its existence negatively affects the maintainability of a system in numerous ways. In order to tackle the issues that come with it, various automated clone detection techniques have been proposed throughout the years. However, the vast majority of them operate using the entire codebase as input, resulting in redundant calculations and undesirable delays when this process is repeated for every new revision of a project. On the other hand, newer incremental techniques address this by storing intermediate information that can be reused across revisions. However, all these approaches are language-specific, utilizing language parsers to generate more sophisticated source code representations, in an attempt to detect more complex types of clones. As a result, less popular languages, for which finding or building a parser is challenging, are unfortunately not supported.
In this study we propose LIICD, a language-agnostic incremental clone detector, capable of detecting exact-match clones. We assess its performance and compare it with a state-of-the-art commercial-grade detector, found within the Software Improvement Group (SIG). Furthermore, we use a similarity estimation technique called Locality Sensitive Hashing (LSH) in an attempt to extend and improve the original approach. Our experiments result in some interesting findings. Firstly, the proposed incremental detector is very efficient and able to scale well for larger codebases. Additionally, it provides a significant improvement compared to a non-incremental commercial-grade detector. Lastly, our LSH-based extension proves to have difficulties matching our original approach's performance. However, future suggestions indicate how the potential of the technique can be further investigated.","Software Engineering; Software Maintenance; Code Duplication; Language-Independent Incremental Clone Detection","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:85c99db3-8695-42b0-a362-84bd5cd35eb1","http://resolver.tudelft.nl/uuid:85c99db3-8695-42b0-a362-84bd5cd35eb1","Improving the Usability of Enterprise Systems: a Case Study","Pagnan, Riccardo (TU Delft Technology, Policy and Management)","Verburg, R.M. (mentor); Ludema, M.W. (mentor); Delft University of Technology (degree granting institution)","2020","In today’s interconnected and dynamic world, sharing knowledge within a company and streamlining its workflow can be a significant source of competitive advantage. As a matter of fact, the literature consistently shows that firms which invest in Enterprise Systems tend to perform better financially and have better retention rates. However, technologies are not always a panacea, and companies often face challenges in their adoption, implementation or usage.
Starting with an analysis of the literature on knowledge, knowledge management and software usability, this thesis proposes a set of guidelines to improve the usability of Enterprise Systems.
The methodology combines qualitative and quantitative elements, with two rounds of interviews and two questionnaires. The first round of interviews explored the needs of users, while the second round validated each guideline individually. The two surveys were instead sent before and after the implementation of the guidelines in order to measure their impact. The questionnaire used for this procedure is the Software Usability Measurement Inventory, the industry standard to evaluate software usability.
This thesis is qualitative research and, as such, is characterised by low external validity. Its most relevant limitation is that it is based on a single case study. However, the methodology followed a strong combination of interviews and analytical surveys, which strengthened the results with a deep qualitative analysis and statistical significance of the findings.
Future research could replicate the same procedure in different companies or via another questionnaire to test the validity of the guidelines. Furthermore, this thesis did not differentiate users based on their software skills, which is an interesting variable that could be investigated.","Enterprise system; Software Usability; knowledge management system; knowledge management; collaborative work","en","master thesis","","","","","","","","","","","","Management of Technology (MoT)","",""
"uuid:61b67c6b-ba61-49d9-830e-750efc2c5e4e","http://resolver.tudelft.nl/uuid:61b67c6b-ba61-49d9-830e-750efc2c5e4e","The Effect of “Good First Issue” Indicators upon Newcomer Developers: Identifying Improvements for Newcomer Task Recommendation","Alderliesten, David (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering)","Zaidman, Andy (mentor); Bidarra, Rafael (graduation committee); Gousios, Georgios (graduation committee); Delft University of Technology (degree granting institution)","2020","The recommendation of tasks for newcomers within a software project through good first issues is being done within the domain of software development, such as on the GitHub platform. These issues aim to help newcomers identify tasks that are suitable for them and their level of expertise within the project. This thesis report investigates the effectiveness of good first issues regarding developer onboarding and task completion by data mining a set of 105 repositories and manually analyzing at most 30 good first issues and 30 initial commits per sampled project. It was found that, although good first issues are effective at developer onboarding, and developers perceive good first issues as being useful, changes can be made to the types of tasks suggested as good first issues to match the types of initial contributions made by newcomers. It was also found that developers with less than a year of experience favored documentation-related contributions for their first commit to a project.","Good First Issues; Software Engineering; Task Recommendation; Developer Onboarding; Newcomer Task Recommendation","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:3a2002bf-146a-42fe-9123-1e02ee103ee7","http://resolver.tudelft.nl/uuid:3a2002bf-146a-42fe-9123-1e02ee103ee7","Delivering B2B experiences that make Exact stand out: A participatory learning approach to Customer Journey Management","Hagen, William (TU Delft Industrial Design Engineering; TU Delft Design, Organisation and Strategy)","Smulders, Frido (mentor); Kranzbühler, Anne (graduation committee); Langeveld, Sanna (graduation committee); Delft University of Technology (degree granting institution)","2020","As the B2B environment of the Software as a Service (SaaS) industry is changing more rapidly than ever, Exact needs to constantly adapt to these changes and accommodate the increasingly complex needs and wishes of their customers. Designing a good customer experience is vital in ensuring a sustainable competitive advantage for Exact. In its attempt to adapt to this changing environment, a shift towards customer-centric initiatives has taken place within Exact’s organisation over the past years. Among these initiatives is the creation of a Customer Journey Management (CJM) team, about half a year ago, within the Customer Success department of Exact. Its aim is to improve customer satisfaction and reduce churn by optimising the customer experience of Exact solutions. This project was initiated by the CJM team, which asked: “How can Exact bring Customer Journey Management to the next level of maturity?” As CJM is slowly getting up and running, many quick wins can still be achieved in the Post-Sales process it is currently responsible for. It is debatable, however, whether this is the right approach for the long term, whether the team has the right influence to make the desired impact, and whether the metrics it is being held accountable for fit its ultimate goal: the optimisation of Exact’s end-to-end customer journey. 
In order to reach this goal, close collaboration between different departments is necessary, empowering them to positively contribute to their part of the customer journey in an integrated way. This can only be achieved through a shared deep understanding of the customer and their journey, and if the value of customer journey management is proven for all stakeholders involved. Ultimately, electing a board member responsible for Exact’s customer experience, in the form of a Chief Customer Officer (CCO), would be advised. The challenges that the CJM team faces, identified in the analysis, all seem to point towards the conclusion that user-centered design thinking is not embedded (enough) into the veins of the company. This does not mean that its employees do not have the customer at heart; rather, the customer is often not understood well enough, and thus mistakes are more easily made. In practice, this approach seems to harm the customer experience, hurting revenues in the long run. As long as the customer experience is not understood deeply enough by employees, Customer Journey Management does not have the means to impact the customer journey end-to-end, as recommended by literature to bring CJM to the next level. Therefore, it was chosen to design a system that enables Exact employees to gain a deeper understanding of their customers. The solution was found in the design of a Game Development Guide. It enables the CJM team (and other employees) to simulate the customer’s experience through playing games. It is a step-by-step guide for game development and provides the process and tools necessary to design, play and evaluate the games based on different customer experiences. Once a game is ready to be played, it can stimulate participatory learning by having employees temporarily step into the customer’s shoes. This creates a deeper (richer) understanding of the customer’s perspective, including his/her (tacit and latent) needs, emotions and contextual factors. 
The resulting deep customer insights can then be used throughout the innovation process to validate assumptions and prioritize solutions, resulting in improved Customer Journey Management. This way, Exact will be able to better understand their customers, enabling them to design user-centered solutions that truly benefit their customers’ experiences. This will ensure that Exact stays on top, giving them a sustainable competitive advantage over their competitors in the long run.","strategic design; customer journey; customer journey management; game design; Exact Software; customer experience","en","master thesis","","","","","","","","","","","","Strategic Product Design","Graduation project",""
"uuid:2ea8acb3-7565-4ac0-b311-2712f900ba80","http://resolver.tudelft.nl/uuid:2ea8acb3-7565-4ac0-b311-2712f900ba80","Enabling the creation and hosting of cooperative online escape events in M.O.R.S.E. without programming experience","Thomas, Wessel (TU Delft Electrical Engineering, Mathematics and Computer Science); Duinkerken, Elwin (TU Delft Electrical Engineering, Mathematics and Computer Science); Groenewegen, Gijs (TU Delft Electrical Engineering, Mathematics and Computer Science); Verlaan, Timo (TU Delft Electrical Engineering, Mathematics and Computer Science); Verboom, Bram (TU Delft Electrical Engineering, Mathematics and Computer Science)","Overklift Vaupel Klein, T.A.R. (graduation committee); Wang, H. (graduation committee); Manenschijn, Jan-Willem (mentor); Delft University of Technology (degree granting institution)","2020","The M.O.R.S.E. system is a tool for creating and managing large escape events, mainly used for local escape events. The tool is designed for only a limited range of puzzle types and styling options, because most puzzles require physical items to be solved and only the answers have to be entered in M.O.R.S.E. Because of this design, it is really difficult to create online escape experiences, especially rich and immersive ones, and doing so requires a lot of programming outside of the M.O.R.S.E. system. Raccoon Serious Games, the client, does not have many employees with programming experience and, therefore, it is not feasible for them to create the rich and immersive online escape experiences they want. To enable such immersive experiences, we are extending M.O.R.S.E. with editable domains and web pages. Game designers can add domains and web pages to the existing event schedule, and puzzles can then be created for web pages. Players can view one or multiple of these domains, and for each domain the active web page will be served. 
Web pages can be created and stored in the domains, but the actual contents of the web pages still have to be made. Because making web pages is often a programming intensive task, a page builder has been created in M.O.R.S.E. This page builder allows the user to load and save web pages created in the M.O.R.S.E.
system. It uses a drag-and-drop system to place building-block elements inside the web pages and allows for directly visible styling of those elements. Because of this, the user does not need programming knowledge of the underlying implementation of the web pages. It also facilitates the linking between M.O.R.S.E. features and the domains such as puzzles and triggers for buttons. Using the import and export functionality, users can easily copy previous web pages created with the page builder. This is not only limited to internal web pages but can also be used to import external code from outside the page builder. With user-friendly features such as the ability to undo and redo changes, the page builder tries to make creating web pages as easy as possible. An important aspect of the escape games hosted by Raccoon Serious Games is team building. We extend upon this notion by adding roles and a leaderboard screen to M.O.R.S.E., both of which increase the need and opportunity for interaction between players. The addition of roles allows game designers to enforce cooperation in their escape events, by restricting the access to resources required for solving a puzzle to only a subset of the players in a team. This way they have to cooperate and combine their information and resources to solve all puzzles. The addition of leaderboards is also an extra incentive for a player in a team to work together efficiently because this will positively impact their score and, therefore, ranking on the leaderboard.","Escape events; AR; Software Development","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","",""
"uuid:a9740c89-a252-472d-8f6b-8e6cf4a34754","http://resolver.tudelft.nl/uuid:a9740c89-a252-472d-8f6b-8e6cf4a34754","Development of visualization software for parcel delivery algorithms","Cras, L.S. (TU Delft Electrical Engineering, Mathematics and Computer Science); Dahrs, S.R. (TU Delft Electrical Engineering, Mathematics and Computer Science); Gielisse, A.S. (TU Delft Electrical Engineering, Mathematics and Computer Science); Nikkels, L.R.M. (TU Delft Electrical Engineering, Mathematics and Computer Science); Ruiter, J. (TU Delft Electrical Engineering, Mathematics and Computer Science)","Spaan, M.T.J. (mentor); Overklift Vaupel Klein, T.A.R. (graduation committee); Hermans, C.A. (graduation committee); Delft University of Technology (degree granting institution)","2020","Almende B.V., a technologically innovative and research-oriented company, has been working on a new algorithm that optimizes routes for parcel delivery trucks. The algorithm contains novel features, such as the possible use of autonomous vehicles, that are currently not taken into account by existing route optimization algorithms and thus by visualization applications. To this end, and to get a more tangible overview of the algorithm’s behavior and performance, the company requested a customized visualization tool. This report describes the process and results of developing such a tool. The tool is presented as a single-page application and is partly depicted on the cover of this document. The goal of the project is to give a clearer overview of the routing algorithm’s capabilities by showing its unique features on a map and displaying statistics on the side. In addition, comparing the algorithm to existing ones should provide added insight into the (expected) benefits of the new algorithm. The main purpose of the tool developed in this project is to provide insight into the workings of the algorithm and to help enhance and develop it. 
An added benefit is that the tool can also be used to demonstrate the algorithm’s performance to various groups of interested parties.","visualization; Software development; delivery schedule","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","TI3806",""
"uuid:5f5ee735-8a28-487b-9bdc-eccd9f380f5f","http://resolver.tudelft.nl/uuid:5f5ee735-8a28-487b-9bdc-eccd9f380f5f","SmartRoads 2.0","Buijnsters, J.L. (TU Delft Electrical Engineering, Mathematics and Computer Science); Hofman, D. (TU Delft Electrical Engineering, Mathematics and Computer Science); Klein Kranenbarg, J.G.P. (TU Delft Electrical Engineering, Mathematics and Computer Science); El Moussaoui, C. (TU Delft Electrical Engineering, Mathematics and Computer Science); Zheng, K. (TU Delft Electrical Engineering, Mathematics and Computer Science)","Gerritsen, B.H.M. (mentor); Chan, K.F. (graduation committee); Wang, H. (graduation committee); Visser, O.W. (graduation committee); Delft University of Technology (degree granting institution)","2020","ScenWise is an innovative company that specializes in data science for traffic management. ScenWise strives to use the newest and best technologies and practices when it comes to web applications, data science and traffic management, as it provides tools to analyse and visualise a variety of situations that occur in traffic management. One such tool is SmartRoads 1.0, which allows users to analyse traffic data and situations via a web application. Unfortunately, SmartRoads 1.0 does not perform as desired. Additionally, ScenWise itself has the problem of not being able to integrate products previously made by student groups into its own existing products. During the research aimed at resolving these problems, another issue arose: ScenWise’s software development life cycle is lacking. Research on the SmartRoads 1.0 performance problem showed that its performance bottleneck lies in the front-end. The outdated SmartRoads 1.0 front-end was therefore replaced with a new and better SmartRoads 2.0 front-end. The integration problem and development life cycle problem are both addressed in the Long-term evolution (LTE) design found in appendix I. 
This LTE design contains the architecture migration plan. This plan will transform the current software architecture into a Service-oriented architecture (SOA), providing a solution for the current integration problems. A result of the first steps of this migration plan is the Application Programming Interface (API) Gateway, which has been implemented in the aforementioned SmartRoads 2.0. In addition to the migration plan, guidelines for ScenWise to improve its software development life cycle are elaborated in the LTE design. In this report, the identified problems, their solutions and their executions are explained, discussed and evaluated.","Traffic Management; Traffic management system; Performance analysis; Service Oriented Architecture; API; Software Migration; Software Development Lifecycle; Product Integration; Front end; software architecture","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:da6486bd-886c-44ac-832a-f9825f6a2ba8","http://resolver.tudelft.nl/uuid:da6486bd-886c-44ac-832a-f9825f6a2ba8","Automated crash fault localization","Popping, Sven (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Deursen, Arie (mentor); Devroey, Xavier (mentor); Panichella, Annibale (graduation committee); de Weerdt, Mathijs (graduation committee); Derakhshanfar, Pouria (graduation committee); Delft University of Technology (degree granting institution)","2020","Debugging application crashes is an expensive and time-consuming process, relying on the developer’s expertise and requiring knowledge about the system. Over the years, the research community has developed several automated approaches to ease debugging. Among those approaches is search-based crash reproduction, which tries to generate a test case capable of reproducing a given crash to make it observable to the developers, based solely on the stack trace included in the crash report. We believe that this makes crash reproduction the perfect candidate to achieve end-to-end crash fault localization. In this thesis, we explore and empirically evaluate the usage of search-based crash reproduction combined with spectrum-based fault localization on 50 real-world crashes. Starting from a crash report, we generate crash-reproducing test cases and use them in conjunction with the existing or an automatically generated unit test suite as input for spectrum-based fault localization. Our results show that, although hand-written test cases remain the most efficient in the general scenario, automatically generated crash-reproducing test cases still reduce the number of statements to be investigated by developers. Additionally, when considering the best-case scenario where only crash-reproducing test cases covering the fault are evaluated, we observe no statistically significant difference between the accuracy of fault localization when using hand-written or automatically generated test cases. 
Our results confirm the feasibility of end-to-end automated crash fault localization. The results also identify new challenges for both automated test case generation and fault localization, as well as when they are combined.","Search-Based Crash Reproduction; Automated Fault Localization; Search-based Software Engineering","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:650d166c-9396-4d9f-94f7-50c10410d40c","http://resolver.tudelft.nl/uuid:650d166c-9396-4d9f-94f7-50c10410d40c","Improving Test Case Generation for RESTful APIs through Seeded Sampling","de Vries, Chiel (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Pawełczak, Przemysław (graduation committee); Delft University of Technology (degree granting institution)","2020","Test cases are used to validate the quality of software. These test cases are often manually written, which is labor-intensive. To avoid this problem, automated software testing was introduced. Search-based software testing is a useful tool for developers to automatically generate test cases. However, improvements are still needed to create test cases that compete with manually written ones.
EvoMaster is a tool that generates system-level test cases for RESTful APIs using the MIO algorithm. An important aspect of this algorithm is sampling new test cases. Currently, EvoMaster employs random and smart sampling to achieve this goal. This paper aims to improve the coverage of the generated tests by expanding the sampling methods with seeded sampling. This method consists of extracting sequences of HTTP requests from manually-written tests and using these to sample new test cases.
Seeded sampling is evaluated on two RESTful APIs with 7 different parameter sets. We show that the addition of seeded sampling can improve the coverage achieved by EvoMaster compared to the current combination of sampling techniques. Nonetheless, this paper has some limitations: it only takes two RESTful APIs into account and has a small number of benchmark runs to back its findings.","Search-Based Software Testing; Seeded Sampling; Test Case Generation; RESTful API","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:c58343a1-3700-42c4-bc81-aacca9f54248","http://resolver.tudelft.nl/uuid:c58343a1-3700-42c4-bc81-aacca9f54248","Inferring Personality from GitHub Communication Data: Promises & Perils","van Mil, Frenk (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, Andy (mentor); Rastogi, Ayushi (graduation committee); Delft University of Technology (degree granting institution)","2020","Personality plays a significant role in our lives; it does not only influence what we think, feel, and do, but also affects what we say about what we think, feel, and do. In software engineering (SE), it might help in improving team composition through a combination of personalities within a team, and it could help explain work preferences and work satisfaction. Earlier studies in the field of software engineering have focused on extracting personality from developers with the use of questionnaires and automatic tools such as psycholinguistic tests. Psycholinguistic tests infer personality based on the words people use. As taking questionnaires is time-consuming, the interest in automated tools has grown. However, there is a lack of studies comparing different psycholinguistic models on actual SE data to validate to what extent these tools apply to software engineering. In this study, we compare two well-established academic models, proposed by Yarkoni and by Golbeck et al., and the popular industrial model Personality Insights by IBM. We use the three models to infer personality from comments on open-source projects on GitHub and compare the resulting scores to a ground truth obtained through a questionnaire among software developers. We establish a baseline and compare the three models on their performance against this baseline. We show that the three methods perform almost equally when mean-centered, indicating that they may work on different scales. 
We show that log-transformations improve the LIWC category scores found, by reducing the effect of outliers, and we give recommendations for thirteen preprocessing steps to improve inference on SE data. We found 600 to 1200 words per person to provide sufficient accuracy while remaining resource-aware, and recommend a minimum of a hundred words for all three methods. Furthermore, we do not find enough evidence of discrimination by any of the three methods between people proficient in English and those who considered themselves non-proficient in English. We find existing psycholinguistic models to be most useful for software engineering when used on a group or team level. When used on an individual level, one should take into account possible inaccuracies and consider the potentially harmful impact that the misuse or misinterpretation of scores may have on an individual.","Software Engineering; personality inference; psycholinguistic models","en","master thesis","","","","","","Related dataset 4TU.ResearchData http://doi.org/10.4121/uuid:6b648676-26f4-4eb1-89dc-050810909b3b","","","","","","Computer Science","",""
"uuid:98c14ea4-3d69-4a4c-994e-3ef0106346a6","http://resolver.tudelft.nl/uuid:98c14ea4-3d69-4a4c-994e-3ef0106346a6","Preserving Inter-gene Relations During Test Case Generation using Intelligent Evolutionary Operators","Stallenberg, Dimitri (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Pawełczak, Przemysław (graduation committee); Delft University of Technology (degree granting institution)","2020","Randomized variation operators can be very disruptive to the search process, especially when there are dependencies between the variables under search. Such dependencies also exist within test cases, which makes it interesting to evaluate the benefits of preserving them during test-case generation.
In this paper, we propose two variants of the Many-Objective Sorting Algorithm (MOSA). The first, ACMOSA, is based on Agglomerative Clustering; the second, GOMOSA, is a Gene-pool Optimal Mixing based variant. ACMOSA and GOMOSA model the inter-gene dependencies and use that model to intelligently perform crossover while preserving key building blocks within individuals. These novel techniques are evaluated in an empirical study and compared to MOSA and the Many Independent Objective algorithm (MIO). The study comprises several benchmark RESTful APIs for which the algorithms generate test-cases.
The results of the empirical study show that, for 40% of the tested APIs, the novel techniques provide a significant benefit time-wise. For another 40% of the APIs, they perform equally well, and for 20% of the APIs under evaluation they performed worse.","Intelligent Evolutionary Operators; Search-based Software Engineering; Test Case Generation","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:6d8a1835-9054-4e4a-a85f-99ac592978da","http://resolver.tudelft.nl/uuid:6d8a1835-9054-4e4a-a85f-99ac592978da","Unit test generation for common and uncommon behaviors","Evers, Björn (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, Andy (mentor); Panichella, Annibale (graduation committee); Lofi, Christoph (graduation committee); Devroey, Xavier (graduation committee); Derakhshanfar, Pouria (graduation committee); Delft University of Technology (degree granting institution)","2020","Various search-based test generation techniques have been proposed to automate the process of test generation to fulfill different criteria (e.g., line coverage, branch coverage, mutation score, etc.). Despite these techniques' undeniable accomplishments, they still suffer from a lack of guidance coming from the data gathered from the production phase, which makes the generation of complex test cases harder for the search process. Hence, previous studies introduced many strategies (such as dynamic symbolic execution or seeding) to address this issue. However, the test cases created by these techniques cannot assure the full coverage of the execution paths in software under test. Therefore, this thesis introduces common and uncommon behavior test generation (CUBTG) for search-based unit test generation. CUBTG uses the concept of commonality score, which is a measure of how close an execution path of a generated test case is from reproducing the same common and uncommon execution patterns observed during the real-world usage of the software. To evaluate the performance of CUBTG, we implemented it in EvoSuite and evaluated it on 150 classes from JabRef, an open-source application for managing bibliography references. We found that CUBTG managed to cover more common behaviors than plain MOSA in 75% of the cases, and more uncommon behaviors in 60% of the cases. 
In up to 10% of the cases CUBTG managed to find more mutants seeded by PIT by using method sequences that plain MOSA did not find.","Search-Based Software Testing; Automated Unit Testing; Common Paths Coverage","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:26da088e-25e1-4de4-bfc2-6935e32646ab","http://resolver.tudelft.nl/uuid:26da088e-25e1-4de4-bfc2-6935e32646ab","Fit2Crash: Specialising Fitness Functions for Crash Reproduction","Xiang, Shang (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology)","Zaidman, Andy (mentor); Panichella, Annibale (graduation committee); Cockx, Jesper (graduation committee); Devroey, Xavier (mentor); Derakhshanfar, Pouria (mentor); Delft University of Technology (degree granting institution)","2020","Software applications inevitably crash, and it is time-consuming to recreate the crash conditions for debugging. Recently, researchers have developed frameworks relying on genetic algorithms, e.g. Botsing, for automated crash reproduction. However, the existing approaches process exceptions of different types as if they were the same. In this thesis, we study how the four most common types of Java exceptions are thrown and define specialised fitness functions for them. We have extended Botsing and carried out an evaluation against 52 real-world crashes from seven various open-source software applications. Our results show that our proposed fitness functions influence both the effectiveness and efficiency, negatively or positively depending on the type of the target exception. This thesis demonstrates how tailoring the fitness functions according to the exception type can improve search-based crash reproduction.","Search-Based Software Testing; Search-Based Crash Reproduction; Genetic Algorithms; Fitness Function","en","master thesis","","","","","","","","","","","","Computer Science","STAMP-project | Botsing",""
"uuid:c5a51168-649e-4126-a56e-12fe7e40162b","http://resolver.tudelft.nl/uuid:c5a51168-649e-4126-a56e-12fe7e40162b","An Extension of CodeFeedr","van der Heijden, Roald (TU Delft Electrical Engineering, Mathematics and Computer Science); van Wijngaarden, Matthijs (TU Delft Electrical Engineering, Mathematics and Computer Science); Zonneveld, Wouter (TU Delft Electrical Engineering, Mathematics and Computer Science)","Katsifodimos, Asterios (mentor); Delft University of Technology (degree granting institution)","2020","CodeFeedr is a Mining Software Repository (MSR) tool designed to efficiently mine massive amounts of streaming data of projects from various sources using Flink’s streaming framework in combination with Kafka. Commissioned by researchers at TU Delft in the field of Data Science and Software Engineering, the goal of this project was to expand further on the product, as it already existed in a development stage. At the start of the project, CodeFeedr consisted of core pipeline functionality and a limited number of plugins that process data sources. CodeFeedr-1Up, as this development team calls itself, aimed to achieve two goals: the first goal is increasing the number of available plugins, defined by usable software repository sources, to be used by the client; the second goal is to implement a REPL functionality that accepts user-friendly SQL-like queries and outputs the queried data stream. Plugins for Maven, Cargo, NPM and ClearlyDefined have been developed, extending the CodeFeedr tool. Furthermore, querying on the aforementioned data sources depending on their data structure is possible for sequential pipelines. With user aid and documentation in mind, logical data models of a plugin’s internal structure have been drawn and supplied in the report.","CodeFeedr; streaming analytics; mining software repositories","en","bachelor thesis","","","","","","","","","","","","Computer Science","Codefeedr",""
"uuid:42f9cb1d-18fa-4dd0-9436-39d4d202c2e3","http://resolver.tudelft.nl/uuid:42f9cb1d-18fa-4dd0-9436-39d4d202c2e3","A Monitoring System for Machine Learning Models in a Large-Scale Context","Park, MyeongJung (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Deursen, Arie (mentor); Huijgens, Hennie (graduation committee); Katsifodimos, Asterios (graduation committee); Gousios, Georgios (graduation committee); Delft University of Technology (degree granting institution)","2020","Since building a machine learning model is costly and involves nine stages, automated machine learning model creation has become crucial in a large-scale context. At the same time, a monitoring system has become an essential factor for machine learning models. This thesis presents the monitoring system for machine learning models at ING, in an enterprise context, with new features required by users. Moreover, the thesis describes a case study of ING, a large global banking company that develops software solutions in-house. We conducted a mixed-methods study, consisting of data collection from the monitoring system and a survey among the users of the monitoring system. Our research shows that the challenges found by the actual users of the monitoring system, and the corresponding challenges discovered by the Microsoft study, relate to machine learning model monitoring, the users' perception of the importance of the monitoring system, and the impact of the monitoring system. We found that the monitoring system at ING supports relatively efficient model management in terms of checking model validation and evaluation. Moreover, the users of the monitoring system perceived it as an important system that supports the models regarding quality, trust in the automated model creation, and usability.
Additionally, compared to the existing solution, the monitoring system at ING provides useful model management.","Machine Learning; Monitoring; Software Engineering","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:6adc063c-1f27-42a5-8fd5-28fadad660da","http://resolver.tudelft.nl/uuid:6adc063c-1f27-42a5-8fd5-28fadad660da","Adaptive Formation Control and Semi-Physical Simulator for Multi-Fixed Wing UAVs","Singh, Satish (TU Delft Electrical Engineering, Mathematics and Computer Science)","Baldi, Simone (mentor); Wahls, Sander (graduation committee); van Genderen, Arjan (graduation committee); Delft University of Technology (degree granting institution)","2019","Formation flying is a phenomenon observed very often in the natural world, e.g. birds flying in a flock. The past decade has placed a lot of emphasis on research into the control of autonomous Unmanned Aerial Vehicles (UAVs) of the fixed-wing kind, in an effort to emulate the behavior of natural flocks. Emulating this behavior requires the construction of path following and formation control laws with the capability of adapting to changing situations, much as natural flocks do. This thesis is devoted to studying Adaptive Vector Field Guidance laws and Adaptive Formation laws for fixed-wing UAVs. Formation control relies on an adaptive hierarchical formation control method for uncertain heterogeneous nonlinear agents with Euler-Lagrange (EL) dynamics. It is shown that various formations (T-V-Y formations) can be established using this method, tested in a Matlab/Simulink environment. Additionally, a distinguishing feature of this thesis is the development of a 3D simulation platform to perform hardware-in-the-loop (HITL) simulations (i.e. using the control hardware on board an actual UAV): a Raspberry Pi is used to run the formation control algorithm and to communicate with a Pixhawk Cube autopilot board which contains the low-level control algorithm. The autopilot board is then connected to a 3D simulator (Gazebo) and a Ground Control System (QGroundControl).
The proposed HITL platform promises to facilitate the testing and validation of guidance and formation laws in a much more realistic way than a Matlab/Simulink environment can.","Fixed-wing UAVs; Adaptive formation control; Gazebo; software-in-the-loop; PX4","en","master thesis","","","","","","","","","","","","","",""
"uuid:5788d484-cee0-4807-9b8d-fff7921b4ffa","http://resolver.tudelft.nl/uuid:5788d484-cee0-4807-9b8d-fff7921b4ffa","Design and Evaluation of a Conversational Agent Model based on Stance and BDI providing Situated Learning for Triage-Psychologists in the Helpline of 113 Suicide Prevention","Sirocki, Jeffrey (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Interactive Intelligence; 113 Suicide Prevention)","Brinkman, W.P. (mentor); Liem, C.C.S. (graduation committee); Neerincx, M.A. (graduation committee); Mérelle, Saskia (graduation committee); Delft University of Technology (degree granting institution)","2019","Objective: This thesis aided 113 Suicide Prevention (113), the national suicide prevention center for the Netherlands, by investigating technical solutions for the helpline, implementing an e-learning prototype housing six suicidal personas within a conversational agent model, and evaluating and analyzing an experiment on its effect, which entailed interactions with one, two, and three simultaneous chats. Methods: The thesis conducted a participant observation with a total of seven triage-psychologists, organized three focus groups with nearly forty participants including triage-psychologists, managers, and training personnel, and administered an evaluation with thirty participants, comprising six triage-psychologists and twenty-four counselors, regarding a prototype to assist in the training of 113's triage-psychologists. Prototype: The system specification provided a prototype with six personas where triage-psychologists can practice against one or many chatbots, or conversational agents, in different situations that pertain to training for 113. The conversational agents' design was based upon the Rose of Leary interpersonal stance and the Beliefs, Desires, and Intentions (BDI) design paradigm.
The system focused on how a conversational agent must react to triage-psychologists' inputs with respect to the subtleties in interpersonal communication and negotiation as they pertain to the 113 suicide helpline. Results: Evaluation results indicate that triage-psychologists found the learning environment motivational and the events in the environment socially realistic. As the number of simultaneous chats increased, counselors experienced an increase in three measurable areas: 1.) mental effort; 2.) situational awareness demand; and 3.) situational awareness supply; even so, counselors were positive about all learning aspects of the new software environment. Conclusion: This work identified the natural language processing, the BDI reasoning model plus natural language generation, and the usability and quality of the prototype as three areas of focus for 113 as it continues to improve its management of the helpline, its training, and research on suicide.","interpersonal stance; belief-desire-intention software model (BDI); conversational agent; Rose of Leary","en","master thesis","","","","","","","","","","","","Computer Science | Data Science and Technology","",""
"uuid:575477d4-c723-48d5-9fd3-d503ea10dd56","http://resolver.tudelft.nl/uuid:575477d4-c723-48d5-9fd3-d503ea10dd56","Determining the viability for consumers of autogenerated Software Development Kits for Web APIs","de Leeuw, Jean (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, Andy (mentor); Leonard, Philip (mentor); Chen, Lydia (graduation committee); Delft University of Technology (degree granting institution)","2019","In this age of web APIs serving as the backbone of millions of services on the Internet, developers aiming to make use of these existing services have to adapt to the developers providing them. Whenever the services change, the users of the services have to change accordingly in order to keep using them. As the number of third-party services used by an application grows, this process of adapting whenever something changes becomes increasingly infeasible. One of the solutions to this problem is the use of autogenerated Software Development Kits (SDKs). In this thesis we explore and determine the viability of these SDKs from the perspective of the consumer. We accomplish this by conducting an experiment that lets participants solve tasks using a web API both with and without an SDK. Their opinions were collected and manually analysed. Several of the participants were also invited for an in-depth interview regarding their opinions on the SDKs. We concluded that autogenerated SDKs are suitable for internal use in situations with low customization of settings. We also concluded that autogenerated SDKs are not suitable for third-party use.","Software Development Kit; Web API; Autogeneration; OpenAPI; Picnic","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:3c7bf465-7bd9-4ce2-b695-cf90569a7b19","http://resolver.tudelft.nl/uuid:3c7bf465-7bd9-4ce2-b695-cf90569a7b19","Fostering a Culture of Research within Agile Processes","Romero Valdes Victoria, Daniela (TU Delft Industrial Design Engineering)","Kuipers, Henk (mentor); Coelen, Jeroen (graduation committee); Delft University of Technology (degree granting institution)","2019","This project focuses on the integration of User Research in a Qualitative approach into an Agile work environment by conducting an in-house Case Study with Werkspot (a home improvement platform in the Netherlands). Customer expectations have hit an all-time high globally (Salesforce Research 2018) and as such, companies are expected to provide experiences beyond products (physical or digital). This has impacted the way companies must operate going forward, by recognizing and addressing customer involvement throughout the development process. The challenge lies in integrating two seemingly contradictory processes, Agile development (fast-paced) and Qualitative Research practices (slow-paced), into a seamless operation. The objective is to include the end-users early in the development process. In this way, Werkspot is able to increase the chances of success of product features by implementing a validation phase prior to the development process. This project makes a research distinction into Validative Research (concept or idea testing on an attitudinal level) and Explorative Research (learning from users on a behavioural level). Through the Research Case Studies (Section 04: ‘Research in practice’), Validative and Explorative research methods are tested and accelerated to operate under the Agile work setting of Werkspot. The result is a Qualitative Research Process for Werkspot; through this process, the company can continuously involve users in the development of the platform.
makers improving the organization of the IT-department when migrating to a SaaS-environment.","Software-as-a-Service; Cloud Computing; maturity model; resource based view of the firm; organisational change; it-department; resources; capabilities; migration","en","master thesis","","","","","","","","","","","","Complex Systems Engineering and Management (CoSEM)","",""
"uuid:b0b39832-c921-412c-b6f8-9ac4c52b57f6","http://resolver.tudelft.nl/uuid:b0b39832-c921-412c-b6f8-9ac4c52b57f6","Log Differencing using State Machines for Anomaly Detection","Tsoni, Sofia (TU Delft Electrical Engineering, Mathematics and Computer Science)","Verwer, Sicco (mentor); van Deursen, Arie (graduation committee); Finavaro Aniche, Mauricio (graduation committee); Wieman, Rick (mentor); Delft University of Technology (degree granting institution)","2019","Huge amounts of log data are generated every day by software. These data contain valuable information about the behavior and the health of the system, which is rarely exploited because of its volume and unstructured nature. Manually going through log files is a time-consuming and labor-intensive procedure for developers. Nonetheless, logging information can expose problematic executions of the software, even when the final outcome seems to be normal. Nowadays, the automatic analysis of log files is crucial for detecting problems, but mainly for understanding how the software behaves, which is beneficial for the prevention of failures and the improvement of the software itself. In that direction, this project aims at the identification of unexpected executions of the software and the determination of their root cause. In more detail, the expected behavior of the software can be approximated using model inference techniques, and newly incoming observed data can be analyzed to verify whether they conform to the expected behavior. The conformance checking method used is called replay. The incoming traces will be replayed on the graph; at the point where they are not validated, the alignment algorithm will take over. The sequence alignment is performed in three different ways. Two of the methods look for the best alignment within a specific radius around the problematic node.
Additionally, a global alignment technique is implemented, based on the well-known Needleman-Wunsch algorithm for DNA sequences. Our goal required the modification of the aforementioned algorithm to align not only two sequences, but also a sequence with a tree-structured model. Finally, the implemented tool visualizes the differences in a way that makes it intuitive for developers to understand what went wrong. Additional information is also provided to make the investigation of the ""anomaly"" easier.","log analysis; log differencing; anomaly detection; state machines; software engineering; sequence alignment; model checkers; log comparison","en","master thesis","","","","","","","","","","","","","",""
"uuid:2c4dc983-eff0-4300-ab52-3eee37011cb2","http://resolver.tudelft.nl/uuid:2c4dc983-eff0-4300-ab52-3eee37011cb2","Building a scalable development cluster at Adyen","Weterings, Gijs (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology; TU Delft Software Engineering)","Finavaro Aniche, Mauricio (mentor); Zaidman, Andy (mentor); Pawelczak, Przemek (graduation committee); Wolters, Bert (graduation committee); Delft University of Technology (degree granting institution)","2019","Software systems today are growing to incredible proportions. Software impacts everything in our society, and its impact on the world keeps growing every day. However, developing large software systems is becoming an increasingly complex task due to their immense complexity and size. For a software engineer to stay productive, it is vital that they can work effectively on a system and focus on the problem at hand. However, large software systems throw up a lot of roadblocks along the way, with complex and slow build processes impacting developer productivity to an ever higher degree as the system grows. To help developers stay productive, we need new, more powerful ways of assisting them during their activities.
In this thesis, we present our development cluster as a part of the solution to developing software at scale. The cluster provides a high-performance infrastructure that can be used by developers to build and deploy their applications during development. By moving these build and deploy processes to a cluster during development, we can benefit from more powerful computing resources which help developers work more effectively. We evaluate our development cluster in a number of different categories, comparing build speed, system startup and general developer workflows. Additionally, we evaluate how well our solution scales and what the impact on network load is for a company integrating with this system.
This move to cloud-based development brings along new challenges, but also many new possibilities in terms of tooling, developer collaboration and software engineering research. We are convinced our cluster can help scale software development efforts in industry, as well as bring new ways of doing research on software engineering.
The pre-existing digital auction is not available as a web application and has generated technical debt over the past twenty years of its existence.
The main challenge of the project was to make sure the application is capable of sufficiently handling the current load of the auction while maintaining similar performance. This translates to a stable connection with a ping of fewer than 30 milliseconds for clients within the Netherlands.
On top of that, the system had to be scalable to support higher numbers of buyers in the future.
We used a microservice architecture able to balance the load over several servers to resolve this.
We spread the load of communicating with clients to services separate from the main application service.
This allowed the main application service to focus solely on adequately keeping track of the state of the clock and determining the winner of a session.
To validate that we indeed achieved the main goals of the project, we created a simulation of any number of clients connecting to the clock auction and placing bids.
In this process, we generated buyer and auctioneer behaviour by analysing transaction data. We extracted several distributions from the data and sampled from them to make the simulation more realistic.
In the end, we ran this simulation ten times for chunks of an auction with 610 connected clients.
A few peaks showed up where pings from client to server were significantly higher than usual. However, in the long run, the system showed low standard deviations in ping, meaning the general consistency was high.
Overall, the results we gathered showed that our application was able to deal with 610 connected clients.
In the end, we consider our project to be a success.
First of all, we showed that a clock application in the browser can be implemented with seven weeks of development time.
Secondly, we showed that such an application could handle a realistic amount of traffic without much trouble, given sufficient computing resources.
These two accomplishments show that replacing the current clock application with a web-based application is feasible.","auction; realtime; distributed systems; software; microservices","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","",""
"uuid:2973a0c6-c06a-4c6a-afc2-f68b0924770c","http://resolver.tudelft.nl/uuid:2973a0c6-c06a-4c6a-afc2-f68b0924770c","Investigating Whether Clean Code Helps Developers in Understanding a New Project","Bottema, Rowan (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering)","Zaidman, Andy (mentor); van Gool, Vincent (graduation committee); Spaan, Matthijs (graduation committee); Delft University of Technology (degree granting institution)","2019","When developers enter a project, there is often a vast amount of existing code for them to understand. Improving the understandability of the code should help them get up to speed. This study researches two methods that could improve the understandability of the code for newcomers: refactoring the code to adhere to Clean Code guidelines, and providing an introductory document. The effect is measured by performing a controlled experiment in which the participants are given small tasks to complete. The results show an increase in productivity for the participants working in the refactored code, and no effect for the participants who had received the introductory document. This suggests that refactoring the code can aid new developers in projects.","Software Engineering; Refactoring; Clean Code","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:53b9a88a-c848-462f-8514-ea2a4aefd7ff","http://resolver.tudelft.nl/uuid:53b9a88a-c848-462f-8514-ea2a4aefd7ff","Gamification of a Static Analysis Tool: A brief look into developer motivation","Saboerali, Raies (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, Andy (mentor); Finavaro Aniche, Mauricio (graduation committee); Katsifodimos, Asterios (graduation committee); Delft University of Technology (degree granting institution)","2019","Software development is about more than just implementing functional code. A developer is also responsible for writing code that measures up to certain standards and conventions. These conventions make sure that the code is of a particular quality that improves readability and eases maintainability. Some of these conventions are checked by automated tools. Automated static analysis tools (ASATs) perform an analysis of the source code and issue warnings. ASATs are available for many programming languages and can be used to find functional or maintainability issues. Even though these tools have been proven to be useful during the code development process, developers do not always utilize them. The overload of warnings in large projects and the relatively low importance of these warnings are among the many reasons why they are ignored. In this study, a gamification tool, Checkpoint, is developed based on the GOAL methodology. The purpose of this tool is to gamify the development process pertaining to ASATs in order to motivate developers. The developers are motivated using various gamification elements in a pretest-posttest pre-experimental design. The study tested the usability of the tool and its effectiveness. The experiment showed that gamification has an impact on developer motivation.","Automated Static Analysis Tools; Gamification; Software development","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:9660c5a3-6ef8-4c6a-b5cf-3994b60d754b","http://resolver.tudelft.nl/uuid:9660c5a3-6ef8-4c6a-b5cf-3994b60d754b","Releasing Fast and Slow: Characterizing Rapid Releases in a Large Software-Driven Organization","Kula, Elvan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Gousios, Georgios (mentor); van Deursen, Arie (mentor); Katsifodimos, Asterios (graduation committee); Delft University of Technology (degree granting institution)","2019","The appeal of delivering new features faster has led many software projects to change their development processes towards rapid release models. Even though rapid releases are increasingly being adopted in open-source and commercial software, it is not well understood what the effects are of this practice. This thesis presents an exploratory case study of rapid releases at ING, a large banking company that develops software solutions in-house, to characterize rapid releases. Since 2011, ING has shifted to a rapid release model. This switch has resulted in a mixed environment of 611 teams releasing relatively fast and slow. We followed a mixed-methods approach in which we conducted a survey with 461 participants and corroborated their perceptions with two years of code quality data and one year of release delay data. Our research shows that: rapid releases can be beneficial in terms of code reviewing and user-perceived quality; rapidly released software tends to have a higher code churn, higher test coverage and lower average complexity; rapid releases are perceived to be, and are in fact, more commonly delayed than their non-rapid counterparts; however, rapid releases are correlated with shorter delays (median: 6 days) than non-rapid releases (median: 16 days); challenges in rapid releases are related to managing dependencies and certain code aspects, e.g. design debt. 
Based on our findings we present challenging areas that require further attention, both in practice and in research, in order to move the practice of rapid releases forward.","rapid release; release cycle; release delay; software quality; technical debt","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:a693e01b-b5ad-4412-8f2a-fb1f2237a3e2","http://resolver.tudelft.nl/uuid:a693e01b-b5ad-4412-8f2a-fb1f2237a3e2","Control of a 3-phase motor drive employing a slim DC-link","Sivaram, Samyuktha (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Quantum Computing; TU Delft DC systems, Energy conversion & Storage)","Wong, J.S.S.M. (graduation committee); van Genderen, A.J. (mentor); Dong, J. (mentor); Voogt, Ewout (mentor); Delft University of Technology (degree granting institution)","2019","A variable speed AC motor drive, fed by a 3-phase AC supply, often consists of a 3-phase bridge diode rectifier, a DC link capacitor and a pulse width modulated inverter. Recently, a new type of capacitor known as a film capacitor or slim capacitor has become popular for use in the DC link. This capacitor has a lower value of capacitance and a longer life span than the conventional electrolytic capacitor. A film (slim) capacitor is advantageous over the electrolytic capacitor for use in the DC link because, for a low power motor, it results in a varying DC link voltage. This produces a less distorted grid current, thereby improving the power factor. However, drives with a slim DC link fed by a soft grid exhibit a tendency to oscillate at higher frequencies. This can be attributed to the LC resonance between the grid inductance and the small DC link capacitance, which results in significant but unwanted voltage ripples on the DC link. The unwanted harmonics affect the performance of the motor and the current drawn from the grid. As a result, the motor drive does not comply with the IEC 61000-3-2 harmonic standard. The objective of this project is to formulate, model and test a control algorithm to suppress the effects of the LC oscillations.
This thesis proposes a novel compensation method that estimates the ideal DC link voltage without the unwanted ripples and feed-forwards the reconstructed DC link voltage to the motor drive algorithm, thereby altering the behavior of the motor drive to be more resistive so that the ripple gets damped. By doing so, the current drawn by the motor from the grid will have less harmonic content. Therefore, the power factor of the system will improve and the system will adhere to the harmonic standards.","Control system for slim DC link; Embedded control systems for motor drives; Feed-forward control for harmonic reduction; MATLAB model based software development","en","master thesis","","","","","","","","2021-04-17","","","","Electrical Engineering | Embedded Systems","",""
"uuid:b2c4d7a5-c2e4-4866-a768-0ae041ce061a","http://resolver.tudelft.nl/uuid:b2c4d7a5-c2e4-4866-a768-0ae041ce061a","Towards a Digital User Research Tool: A Digital Workflow of User Research for Software Companies","Lasamahu, Garry (TU Delft Industrial Design Engineering)","Romero Herrera, Natalia (mentor); Creusen, Marielle (graduation committee); Lagendijk, Frank (graduation committee); Delft University of Technology (degree granting institution)","2019","The assignment at issue in this graduation project elaborates on Shipright. Shipright is a digital tool meant to capture insights for customer research purposes. It offers a workflow to collaboratively process and analyze feedback, in order to find insights about customers’ experiences with software products and possible directions to improve product design. Shipright is meant to be used by scale-up SaaS companies to conduct user research on their products’ users. SaaS (software as a service) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. The targeted SaaS companies are small and medium businesses (SMBs = up to 100 FTE / up to 10M revenue) and mid-market businesses (100 – 500 FTE / 10M – 100M revenue). For outreach, though, the minimum company size is set at 10 FTE. This allows start-ups to be included as a secondary target group. Among this target group, user research is defined as the collection of user feedback and its analysis to arrive at actionable tasks. Everything around product decisions revolves around product teams, with Product Managers / Owners, UX-researchers, designers and developers as typical key members. The problem encountered is that, though rich insights from user research are desired, the fast-paced environment of scale-up companies prevents them from spending enough time and attention on user research.
Shipright’s original design is able to help scale-up SaaS companies to collect and organize feedback data from their products’ users, and to turn these data into insights. But, users do not seem to fully understand the different steps to follow throughout this analytical process. Besides that, team collaboration is not supported yet.","Digital technology; Software; User research; User feedback; UX-design","en","master thesis","","","","","","","","","","","","Design for Interaction","",""
"uuid:fbff3741-84f8-496d-94a4-2f99baeb3f42","http://resolver.tudelft.nl/uuid:fbff3741-84f8-496d-94a4-2f99baeb3f42","SAT-ANS: System Analysis Tool for Autonomous Navigation in Space: Integrated Pulsar, Angle, and Radial Velocity Measurements","Jongschaap, Arjen (TU Delft Aerospace Engineering)","Sundaramoorthy, Prem (mentor); Fónod, Róbert (graduation committee); Gill, Eberhard (graduation committee); van der Wal, Wouter (graduation committee); Delft University of Technology (degree granting institution)","2018","","Software; Navigation; Pulsars","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:dfc0360e-283a-4534-96c4-6ed39c16f2a4","http://resolver.tudelft.nl/uuid:dfc0360e-283a-4534-96c4-6ed39c16f2a4","Improving Code Quality in Agile Software Development","Krombeen, Lars (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Engineering)","Hermans, Felienne (mentor); van Deursen, Arie (graduation committee); Brinkman, Willem-Paul (graduation committee); Fraser, Desiree (mentor); Delft University of Technology (degree granting institution)","2018","Agile software development is a popular approach for developing software. Another important topic of research in software engineering is code quality. Unfortunately, little extensive research has been done on how these two influence each other. The goal of this study is therefore to explore the connection between the two using a qualitative approach. To understand this connection we use Grounded Theory as a qualitative methodology to interview 20 participants across two organisations. In doing so we present a detailed description of the Grounded Theory implementation and the results we obtain from it. The results are used to explore the relation between code quality and agile software development, and they show that team empowerment is the core relation between them. The results are structured in a theory which establishes four core values for achieving team empowerment, the conditions that apply to these values, and the practices that can be applied to stimulate those conditions. The outcomes of the study are further verified using an online questionnaire across multiple countries. The theory will be expanded further to establish theoretical links between Agile best practices and code quality metrics, giving teams concrete solutions to improve their code quality scores.","Empirical Research; Software Engineering; Grounded Theory; Code Quality; Agile Software Development; Team Empowerment","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:01de822a-c3d1-4442-8edc-3708062a4a84","http://resolver.tudelft.nl/uuid:01de822a-c3d1-4442-8edc-3708062a4a84","Momento: a strategic alignment tool for the Schedule-Tool project team","Bakker, Stijn (TU Delft Industrial Design Engineering)","Santema, S.C. (mentor); de Lille, C.S.H. (mentor); Klitsie, J.B. (mentor); Wiemeijer, Ocky (mentor); Stomph, Sander (mentor); Delft University of Technology (degree granting institution)","2018","","Self-managing teams; Strategic alignment; Software development; Organisational design","en","master thesis","","","","","","","","2019-10-17","","","","Strategic Product Design","",""
"uuid:4b27e4e5-8ccb-42a7-83d6-cccd8cdff288","http://resolver.tudelft.nl/uuid:4b27e4e5-8ccb-42a7-83d6-cccd8cdff288","Faster Onboarding of Developers in Existing Codebases","van den Oever, Sander (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, Andy (mentor); Finavaro Aniche, Maurício (graduation committee); Liem, Cynthia (graduation committee); Van, W. (graduation committee); Delft University of Technology (degree granting institution)","2018","When a new software developer joins bunq, a Dutch bank, (s)he will need some time to get familiar with the existing codebase. Preferably, the time necessary for the familiarisation is as short as possible. The faster the developer is up to speed, the faster (s)he can contribute to new features and help solving bugs. This research develops a prototype for a tool that aims to support developers in the comprehension of the PHP backend of bunq. This tool has been evaluated by interviewing multiple developers. Furthermorewe asked the developerswhether the toolwould be able to replace a senior developer regarding question asking. During the interviews feedback was acquired on the current prototype. New developers found the tool useful, although there was also room for improvement. More experienced developers indicated that they found the tool less useful, but saw opportunities to use the tool for more managerial-like tasks.","Software development; onboarding; visualisation; software comprehension","en","master thesis","","","","","","","","","","","","","",""
"uuid:70d4e92f-aa1a-4034-b5dd-c121e015f928","http://resolver.tudelft.nl/uuid:70d4e92f-aa1a-4034-b5dd-c121e015f928","Increasing Operational Awareness using Monitoring-Aware IDEs","Winter, Jos (TU Delft Electrical Engineering, Mathematics and Computer Science)","Finavaro Aniche, Maurício (mentor); van Deursen, Arie (graduation committee); Brinkman, Willem-Paul (graduation committee); Borlovan, Călin (mentor); Delft University of Technology (degree granting institution)","2018","It is important to detect problems fast and to have a clear overview of what is happening within a system after deployment to maximize the uptime and functional quality of the system. Therefore it is necessary to increase the awareness that developers have of errors and logs. Increasing the awareness that developers have of errors and logs has a positive impact on finding problems and solving them. This thesis aims to use monitoring information to bridge the gap that exists between development and operations. We propose to do this by linking the logs to source code to provide this missing link between development and operations. We provide a theory for a monitoring-aware IDE which aims to tackle some of the challenges and enhance some of the practices that exist in the field. We implemented a monitoring-aware IDE and performed a field study to measure its effect. Our results show that a monitoring-aware IDE assists the developer in understanding the system, fixing problems in the code, and improving the monitoring code.","Software Engineering; Runtime Monitoring; DevOps; Modern Monitoring","en","master thesis","","","","","","","","","","","","","",""
"uuid:b00d5c26-4371-455d-88ec-70da45e4c7f7","http://resolver.tudelft.nl/uuid:b00d5c26-4371-455d-88ec-70da45e4c7f7","Exploring the Organizational Context around Agile Software Delivery","Rustema, Roeland (TU Delft Civil Engineering and Geosciences)","Bakker, H.L.M. (mentor); Bosch-Rekveldt, M.G.C. (mentor); Steenhuisen, B.M. (mentor); van Nierop, Martijn (mentor); Delft University of Technology (degree granting institution)","2018","Agile management has revealed itself as a management approach that copes with an unclear product scope and fast-changing circumstances. This approach has gained popularity by projects in fast changing environments, such as the information technology (IT) industry. Nevertheless, many companies that adopted agile methods are still structured according to a traditional, non-agile form of organization. Projects with an agile project management approach embedded in a non-agile organization might face numerous difficulties. What these difficulties exactly encompass is not fully understood yet and therefor this research strives to fulfil the following objective: “Explore the interaction between agile project management and its organizational context.”
In this way this research aims to contribute to the literature about the implementation of agile within an organization. In addition, it provides organizations with insight into how an organization is best aligned with agile. This research uses the term software delivery, since this includes projects as well as ongoing activities. This results in the following research question: “What kind of adjustments can an organization make to better facilitate agile software delivery?”
The literature study resulted in a compilation of eleven aspects that are relevant to examine the interaction between agile software delivery and the organizational context. These aspects are used as guidance during the interviews. The interaction of agile with the organizational context is discussed with the interviewees in relation to these eleven aspects, in that way creating an image of the alignment of agile within the organization.
Multiple commonalities are observed over the different cases. Interpretation of the results of the cross-case analysis resulted in three patterns.
The first pattern shows how organizations tend to focus on the team level when implementing agile and have the tendency to neglect the organizational adjustments around teams. In this research, the organizational surrounding is described as the governance structure around the teams and includes the division of tasks, responsibilities and other governance mechanisms. Several of the interview observations can be explained by an insufficient adjustment of the governance structure.
A second pattern concerns the extent to which agile is understood and how it is interpreted. Some of the interview observations can be explained by an insufficient understanding of agile. Adjustment of the governance structure around the software delivery teams should be aligned with agile concepts.
The third pattern shows that several observations can be explained by the fact that change needs time. Every organization undertakes a transition when implementing agile software delivery to change the governance structure and to understand agile. Alignment of agile within the organization depends on the stage an organization is in during this transition.
Based on these patterns, this research concludes that an organization could consider adjusting its governance structure to better facilitate agile software delivery. When making these adjustments, a sufficient understanding of agile is required to ensure that adjustments to the governance structure are aligned with agile software delivery. In addition, the implementation of agile and the adjustments to the governance structure can be considered a transition that needs to be managed proactively.
During this graduation project, I have explored how end-users can be better involved in the development process, and how this involvement should be planned so that essential feedback is gathered, effectively put into use, and shared with end-users.
As a case study, I collaborated with AerData, a software company located in The Netherlands, which develops software for the aviation industry. AerData was experiencing problems similar to those that many other Agile companies have when dealing with user integration. The research of this project explores user involvement in an Agile environment from several perspectives. The phases of the research comprise: A) a literature review of Agile and User-Centered Design, the main problem of their integration, and existing solutions where Agile and User-Centered Design are successfully combined; B) an analysis of how AerData is applying Agile, how it involves users in the creation of the Product Backlog, and how the users experience the involvement; C) an exploration of which information is gathered during the involvement, and how this information is integrated in the Sprint activities; D) an overview of the areas that high-quality user involvement should address.
After the research was performed, several ideas were generated, targeting different phases of the user involvement process. These ideas were followed by evaluation sessions with the team members and with the end-users, in which the process, the tools needed, the interaction qualities, the steps and the stakeholders were refined.
With all this information, a final design was proposed to AerData: the User Coach for Agile Companies. This process and set of methods comprises all the phases the company should follow in order to plan and prepare the customer event, gather feedback during the customer event, and analyse the feedback so it can be shared with the end-users and the development team, until it is finally implemented. The User Coach provides an explanatory booklet of the process, the templates needed for each phase, and consultation cards that coach the team members in giving the information needed to achieve the goal of each session. Ultimately, the goal is to allow the team members and end-users to keep track of the feedback and create a connection among them.
In general, the process allows AerData and other Agile companies to improve the experience of user involvement and the relationship with the end-users. Further research should be done regarding the dynamics revolving around the categorized and online feedback, its maintenance and prioritization. A final evaluation should be made in order to understand the relationship between each of the phases and the results obtained with the development activities, and to finally uncover its tangible benefits.","Agile Software Development; User Involvement; Process design; Toolkit; User-centered design; Scrum","en","master thesis","","","","","","","Campus only","","","","","Design for Interaction","",""
"uuid:b27bfe45-d983-441c-978f-99a33fdeb714","http://resolver.tudelft.nl/uuid:b27bfe45-d983-441c-978f-99a33fdeb714","Motion Reference Unit Testing Platform: Software","Benders, Dennis (TU Delft Electrical Engineering, Mathematics and Computer Science); Burgers, Bastiaan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Remis, R.F. (mentor); Nieuwenhuizen, Frank (mentor); Lager, I.E. (graduation committee); Aubry, P.J. (graduation committee); Delft University of Technology (degree granting institution)","2018","The Motion Reference Unit (MRU) is an important component in the Ampelmann Operation B.V. systems. In order to assess the performance of different MRUs a test system is developed. By using a one Degree of Freedom rail, wave motions can be simulated in the sway, surge and heave direction of a ship. The test system is divided in three parts: hardware, software and MRU assessment. This thesis focuses on the software design and implementation of the system. It turned out that the software performed well enough for the test system. However, due to limited project time, not all designed functionality could be implemented.","Testing platform; MRU; Software; Real-time application","en","bachelor thesis","","","","","","","","2023-07-13","","","","Electrical Engineering","Bachelor graduation project Electrical Engineering",""
"uuid:e9bdfac5-0c2f-4c85-b9ec-403c88cac696","http://resolver.tudelft.nl/uuid:e9bdfac5-0c2f-4c85-b9ec-403c88cac696","Creating an automation tool for customer journey experts at ING","Kluiters, Thomas (TU Delft Electrical Engineering, Mathematics and Computer Science); Overweel, Leon (TU Delft Electrical Engineering, Mathematics and Computer Science); Vos, Daniël (TU Delft Electrical Engineering, Mathematics and Computer Science); Vos, Jelle (TU Delft Electrical Engineering, Mathematics and Computer Science)","Zaidman, A.E. (mentor); Wang, H. (graduation committee); Visser, O.W. (graduation committee); Markslag, Han (graduation committee); Purmer, Kyra (graduation committee); Brand, Jesse (graduation committee); Delft University of Technology (degree granting institution)","2018","ING wants to offer their customers the best experience possible. To achieve this goal, ING’s Customer Journey Experts (CJEs) constantly map and analyze the way customers use ING services in a Customer Journey Map. These maps however, are hard to share and collaborate on. ING needs an online tool in which they can, together with multiple people, build and maintain Customer Journey Maps. During our research phase we visited many different squads and found out that no single solution fits all needs. That is why we made our tool as customizable as possible with features such as: colors, text decorations, highlighting and templates. We worked in bi-weekly sprints for which we selected work from a top 50 issues board that we ordered by importance and difficulty. The final product, Mapp , allows CJEs to define, share and collaborate on customer journeys. CJEs can illustrate their customer’s steps using text, images, emotions, checkboxes andtimelines. TosharetheirworktheycanexportasPDFandprintinanysize. Andfinallytocollaborate they can simply share their journey’s URL. The product was user validated during a large midterm and endterm test, as well as during short weekly tests. 
All of the chapter leads we talked to were super excited and are soon marketing the product in their teams!","Polymer; Java; Hibernate; Postgres; Agile; Software; Collaboration; Realtime","en","bachelor thesis","","","","","","","","","","","","","Customer Journey Tool Mapp",""
"uuid:0120a2ad-a153-4740-9ee5-067726b506dd","http://resolver.tudelft.nl/uuid:0120a2ad-a153-4740-9ee5-067726b506dd","Dynamic Distribution through the city of Amsterdam","Mostert, Chris (TU Delft Electrical Engineering, Mathematics and Computer Science); Schröder, Casper (TU Delft Electrical Engineering, Mathematics and Computer Science); Eysbach, Jelle (TU Delft Electrical Engineering, Mathematics and Computer Science); Bakx, Ilja (TU Delft Electrical Engineering, Mathematics and Computer Science)","Spaan, Matthijs (mentor); Visser, Otto (graduation committee); Wang, He (graduation committee); Delft University of Technology (degree granting institution)","2018","This report explains the design choices, implementation and results of a software engineering project commissioned by MakeTek. The team was tasked with making a system that could solve bike delivery scheduling problems with limited bike carrying capacity, time windows, dynamic delivery additions, movable pickup points and delivery time estimates.
The project started with two weeks of research, which concluded that the main algorithm should be a Genetic Algorithm (GA). The final product contains an app for visualising results as well as a backend that runs the genetic algorithm. Both systems are written in TypeScript and built upon a boilerplate provided by MakeTek. To calculate routes, an open-source routing provider is used. The project is executed with an agile development workflow to ensure the product conforms to the client's expectations.
The implemented system contains almost all desired features; the missing ones were deliberately left out due to time constraints and limited functional benefits. It is well-tested, extensible and adheres to software quality standards.
A solution is constructed by first clustering deliveries by location to distribute them over the bikes; the genetic algorithm is then run on each cluster separately. At the end of every generation of the GA the intermediate best solution is saved in a database. This ensures a valid solution is available at all times. Finally, postprocessing can be run on the final solution, which checks whether it is more efficient for a deliverer to wait between certain deliveries to prevent them from arriving early.
Testing shows that the implemented system and its features work as intended on different datasets. The effect of different parameters on the performance of the genetic algorithm is explored, with the following conclusions:
The algorithm should have enough opportunity to explore the solution space. This is achieved by setting an appropriate mutation parameter.
No significant performance differences are present between single point, two-point and uniform crossover.
Tournament selection converges faster than roulette selection, but explores less of the solution space.
The resulting system successfully implements all of the requirements as set by the client. It includes an algorithm that converges to an optimum, it can adapt to new changes and a solution can be requested at any time during runtime. It can be concluded that the project was brought to a successful end.","Pickup and Delivery problem; Genetic Algorithm; Software Engineering","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:952e6ba8-8316-416a-9476-33cf8881f8e2","http://resolver.tudelft.nl/uuid:952e6ba8-8316-416a-9476-33cf8881f8e2","Zesje: Web-based paper exam grading system","Cleintuar, Nick (TU Delft Electrical Engineering, Mathematics and Computer Science); van der Krieken, Justin (TU Delft Electrical Engineering, Mathematics and Computer Science); Mahabier, Jamy (TU Delft Electrical Engineering, Mathematics and Computer Science)","Akhmerov, Anton (mentor); Finavaro Aniche, Mauricio (graduation committee); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2018","Grading can be a very time-consuming activity for teachers. For this reason, numerous tools exist to aid teachers in grading. One of these tools is Zesje, a web application that allows electronic grading of paper-based exams. A major drawback of Zesje was that teachers were required to create their exams with LaTeX, a popular document typesetting language. Having to use LaTeX meant two things: not being able to use other software to create exams and poor performance when compiling the LaTeX source. This report outlines the research, development, and evaluation of a project that involves the deprecation of the LaTeX template from the software stack of Zesje.","Education; Grading; Image processing; Software Engineering","en","bachelor thesis","","","","","","","","","","","","Computer Science","Bachelor End Project",""
"uuid:cca5e4ea-3d00-4ae3-877a-b302829e7f08","http://resolver.tudelft.nl/uuid:cca5e4ea-3d00-4ae3-877a-b302829e7f08","Schaapi: Early detection of breaking changes based on API usage","Abrahams, Joël (TU Delft Electrical Engineering, Mathematics and Computer Science); Andreadis, Georgios (TU Delft Electrical Engineering, Mathematics and Computer Science); Boone, Casper (TU Delft Electrical Engineering, Mathematics and Computer Science); Dekker, Florine (TU Delft Electrical Engineering, Mathematics and Computer Science)","Aniche, Maurício (mentor); Katsifodimos, A (graduation committee); Delft University of Technology (degree granting institution)","2018","Library developers are often unaware of how their library is used exactly in practice. When a library developer changes the internals of a library, this may unintentionally affect or even break the working of the library users' code. While it is possible to detect when a syntactic breaking change occurs, it is not as easy to detect semantic breaking changes, where the implicit contract of a functionality changes, sometimes unbeknownst to the library developer. Because library users rarely test the behaviour they expect of the library, neither the library developer nor the library user will be aware of the new behaviour.
As a library developer, you want to be able to see how a change in your library will affect your users before a new version of the library is deployed. More specifically, you want to gain insight into how users use the library, and want to see if and how changes affect users. This will allow you to determine whether the new version of the library is backwards compatible. Finally, after deploying the breaking changes, you want to notify the affected users of the changes and of a solution to the issue.
Schaapi, a tool for early detection of breaking changes based on API usages, addresses these needs. It mines public repositories for projects using a given library, analyses their usage of the API of that library, and generates tests that capture this behaviour. Finally, it offers a continuous integration service that automatically executes these tests against new versions of the library and warns developers of any potentially breaking changes in functionality. The tool has also been validated against real-world data to demonstrate its performance in realistic usage scenarios and to answer a selection of related research questions.","API; Breaking Changes; Mining Software Repositories; Continuous Integration; Software Library","en","bachelor thesis","","","","","","","","","","","","Computer Science","Bachelor End Project",""
"uuid:832a88a1-95b3-4c58-8318-946913bb2932","http://resolver.tudelft.nl/uuid:832a88a1-95b3-4c58-8318-946913bb2932","A plug-in infrastructure for the CodeFeedr project","Kuijpers, Jos (TU Delft Electrical Engineering, Mathematics and Computer Science); Quist, Joris (TU Delft Electrical Engineering, Mathematics and Computer Science); Zorgdrager, Wouter (TU Delft Electrical Engineering, Mathematics and Computer Science)","Abeel, T.E.P.M.F. (mentor); Gousios, G. (graduation committee); Wang, H. (graduation committee); Delft University of Technology (degree granting institution)","2018","CodeFeedr is a research project at the software engineering division of the Delft University of Technology in collaboration with the Software Improvement Group. The research focuses on a software infrastructure which serves software practitioners in utilizing data-driven decision making. Currently, frameworks like Apache Flink are capable of high-performance data streaming. However, these frameworks have a lot of overhead in setting up, and adding new streaming queries takes a lot of time. They also have several limitations in combining real-time data with historical data and doing aggregations on streams from multiple sources. The developed product is a plug-in framework on top of Apache Flink, that provides a pipelining system for streaming queries. This product includes abstractions for well-known sources like GitHub, TravisCI and Twitter as well as support for historical data in mongoDB. With this framework the users can spend their efforts on actually writing streaming queries instead of setting up environments, input sources and output destinations. The product also includes orchestration tools for running streaming jobs on a distributed system.
Previous research stated how ordering code changes based on their relations may constitute an effective way to support reviewers. Based on this premise, this work focuses on studying how this ordering theory may be applied in practice. As a result, a tool that automatically orders the modifications in a commit has been created.
Moreover, the tool has been tested and an initial investigation of the perceived usefulness of its relations has been conducted. Finally, it has been investigated whether the ordering produced by the tool is identified as useful by developers and which factors may influence this choice.","Software Engineering; Code Review; Tool","en","master thesis","","","","","","","","","","","","Computer Engineering","",""
"uuid:4109b0a6-b892-40e4-a4db-9848c24219f6","http://resolver.tudelft.nl/uuid:4109b0a6-b892-40e4-a4db-9848c24219f6","Exploring DDU in Practice","Ang, Aaron (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Deursen, Arie (mentor); Lima Maranhao De Abreu, Rui (graduation committee); Zaidman, Andy (graduation committee); Witteveen, Cees (graduation committee); Delft University of Technology (degree granting institution)","2018","The quality of test suites is commonly measured using adequacy metrics that focus on error detection, like test coverage. However, the diagnostic performance of spectrum-based fault localization techniques, that can potentially reduce the time spent on debugging, rely on diagnosability of test suites --- the property of faults to be easily and precisely located. Therefore, in prior work, Perez et al. proposed a new metric, called DDU, that measures the diagnosability of test suites. However, DDU is not yet usable in practice due to its output value between 0 and 1. A developer would not know what test to write next given a certain DDU value. In this study, we explore the performance of DDU in practice by analyzing open source projects. We find no evidence that DDU is correlated to diagnosability and, thus, DDU is currently only useful when combined with test generation techniques.","software fault localization; diagnosability; DDU","en","master thesis","","","","","","","","","","","","","",""
"uuid:16923acd-f1af-4816-b266-7dd9b38d475f","http://resolver.tudelft.nl/uuid:16923acd-f1af-4816-b266-7dd9b38d475f","Online Survivability in Software Defined Elastic Optical Networks","He, Lina (TU Delft Electrical Engineering, Mathematics and Computer Science)","Kuipers, F.A. (mentor); Delft University of Technology (degree granting institution)","2018","With the high and varying demands for network bandwidth, Elastic Optical Networks (EON), as a promising solution for future optical transport networks, have been getting increased attention due to its flexibility and efficiency. As a huge amount of data is transformed over these networks, even short failures will lead to major data loss. Network survivability is, thus especially crucial in EON. Two main criteria for evaluating the survivability performance of optical transport networks are recovery time and resource efficiency. Taking resource efficiency into consideration, we propose a multiple-backup-path protection scheme for traffic over super-channels which is the trend in EON. As the proposed multiple-backup-path protection scheme is more suitable for traffic over super-channels, we classify the traffic based on their bandwidth requirements. For different classes, we propose a survivability scheme named Hybrid Single and Multiple Backup Protection (HSMBP), which combines the single and multiple path backup protection.
Because Software Defined Networking (SDN) is an ideal architecture that allows easy control and flexibility of EON, we realize HSMBP under its architecture. Furthermore, we test this hybrid scheme online in reference networks and compare it to two other protection schemes. Our simulation results show that HSMBP can effectively improve the network performance by reducing the Bandwidth Blocking Probability (BBP).","Elastic Optical Networks; Survivability; Software Defined Networking","en","master thesis","","","","","","","","","","","","Electrical Engineering | Embedded Systems","",""
"uuid:9a437364-c865-4d8b-90c5-b598c57841f5","http://resolver.tudelft.nl/uuid:9a437364-c865-4d8b-90c5-b598c57841f5","Entrepreneurs and Accountants: Vision 2025","Mathur, Gaurav (TU Delft Industrial Design Engineering; TU Delft Product Innovatie Management)","Kleinsmann, Maaike (mentor); Price, Rebecca (mentor); Delft University of Technology (degree granting institution)","2017","Based on research with entrepreneurs, accountants and technology experts, a vision for the future of entrepreneur-accountant relationship is articulated. This vision is brought to life by the 'Centaur' product concept. Centaur uses Machine Learning to generate actionable financial insights. Voice commands make interaction seamless, and Dynamic Emotion Graphs encourage a deeper connection between accountants and entrepreneurs.","Vision in Design; Technology Strategy; Entrepreneurship; Accounting; Digital Design; Exact Software","en","master thesis","","","","","","","Campus only","","","","","Strategic Product Design","",""
"uuid:fc0cf997-4900-435c-b213-00e5828490de","http://resolver.tudelft.nl/uuid:fc0cf997-4900-435c-b213-00e5828490de","A Case for Deep Learning in Mining Software Repositories","Nijessen, Rik (TU Delft Electrical Engineering, Mathematics and Computer Science)","Gousios, G. (mentor); Hauff, C. (graduation committee); van Deursen, A. (graduation committee); Delft University of Technology (degree granting institution)","2017","Repository mining researchers have successfully applied machine learning in a variety of
scenarios. However, the use of deep learning in repository mining tasks is still in its infancy.
In this thesis, we describe the advantages and disadvantages of using deep learning in mining software repository research and demonstrate these by doing two case studies on pull requests.
In the first, we train neural models to predict, on arrival, whether a pull request is going to be merged or not.
In the second, we train neural models to answer the question: given two pull requests, are these similar?
We show that using neural models, researchers are able to avoid feature engineering, because these models can be trained on raw data.
Furthermore, neural models have the potential to outperform
traditional supervised machine learning models, due to being able to learn relevant features by themselves.
However, the power of neural models comes at a cost: optimizing the parameters of neural models and explaining neural models is difficult and training them is costly.
We therefore recommend that researchers take into account well-performing neural architectures from other domains, such as natural language processing, before creating novel architectures.
Furthermore, it is important to include a less costly baseline when using neural models in research, to show that the power, and thereby the cost, of neural models is justified.","deep learning; mining software repositories; pull requests","en","master thesis","","","","","","","","","","","","Computer Science | Software Technology","",""
"uuid:636bdd6f-90be-4163-bf12-3f935585cf4e","http://resolver.tudelft.nl/uuid:636bdd6f-90be-4163-bf12-3f935585cf4e","Probabilistic downtime analysis for complex marine projects: Development of a modular Markov model that generates binary workability sequences for sequential marine operations","Bruijn, Willem (TU Delft Civil Engineering and Geosciences; TU Delft Hydraulic Engineering)","Jonkman, Sebastiaan N. (mentor); van Gelder, Pieter (mentor); Morales Napoles, Oswaldo (mentor); Hendriks, A.J.H. (mentor); Delft University of Technology (degree granting institution)","2017","A complex marine project consist of series of operations, with each operation subject to a predefined operational limit and duration, depending on the equipment being used. If actual weather conditions exceed the operational limit, then the operation cannot be executed and hence downtime occurs. It is up to contractors, such as Boskalis, to accurately estimate the expected downtime in order to determine the project costs. Recently, anew tool has been developed to make downtime assessments by using the Markov theory: the so-called `Downtime-Modular-Markov model' (DMM-model). It abstracts the actual metocean conditions by stochastically producing binary `workability sequences' for each operation, where a distinction has been made between workable and non-workable states given an operational limit. The Markov statistics of the model are based on the characteristics of the observed metocean conditions. Complex marine project simulations are realizable based on these statistics. The purpose of this thesis is to develop the DMM-model for which a software-testing process is applied. In the verification phase the concept and the code of the model are checked on correctness, consistency and completeness. Subsequently, the validation phase addresses to the quality of the model. 
Three different metocean datasets are used to test whether the model and its individual modules perform sufficiently accurately. The most important findings of both phases are tackled in the improvement & extension phase. Adjustments made during this last phase bring the DMM-model to a new state of the art. It is recommended for further study to conduct an uncertainty analysis (quantify the model and parametric uncertainty).","Complex marine project; operation; operational limit; downtime; Markov theory; Downtime-Modular-Markov model; workability sequences; simulation; software-testing; verification; validation; improvement; extension","en","master thesis","","","","","","","","2018-10-20","","","","","",""
"uuid:778183c3-b86d-4a62-a06b-c8a78163d098","http://resolver.tudelft.nl/uuid:778183c3-b86d-4a62-a06b-c8a78163d098","Developing design capabilities in a software SME","Bastiaansen, Sjoerd (TU Delft Industrial Design Engineering)","Govers, Pascalle (mentor); Price, Rebecca (mentor); Machielsen, Tjeerd (mentor); Delft University of Technology (degree granting institution)","2017","The study investigated a small- to medium-sized enterprise (SME) that had expressed an interest in exploring the potential benefits of developing design capabilities. Previously, CM had conducted a company-wide branding exercise and saw an opportunity to explore this further. During a 6-month period, the researcher was embedded at CM as a design innovation catalyst to understand what first steps the company could take and to help the firm take these steps. Through design workshops and knowledge sharing, the catalyst managed to improve the understanding of design and to change the behaviours of employees who were more actively involved in the catalyst’s work.","design capabilities; Software development; Design thinking; Small- to medium-sized Enterprise","en","master thesis","","","","","","","Campus only","","","","","Strategic Product Design","",""
"uuid:a7e5a7be-902a-4536-8247-7ce83b26ab4b","http://resolver.tudelft.nl/uuid:a7e5a7be-902a-4536-8247-7ce83b26ab4b","On the Effect of Code Quality on Agile Effort Estimations: The Case of Shell","van Breemen, Jorden (TU Delft Electrical Engineering, Mathematics and Computer Science)","Bacchelli, Alberto (mentor); van Solingen, Rini (graduation committee); Essenius, Rik (graduation committee); Sawant, Anand (graduation committee); Delft University of Technology (degree granting institution)","2017","Agile software development has interested researchers for the last decade. Agile software development teams work in iterations that often last weeks, during which they develop the technical code and its content. Intuitively, more effort is required to implement new features in poorly constructed, low-quality code. This study investigates if and how developers consider the quality of their code during their agile effort estimations. Furthermore, we investigate whether the accuracy of their estimations could increase if developers considered the quality of the code. This study is conducted in a large software development department that is part of Royal Dutch Shell. We take a mixed-method approach, where we interview nine developers and quality experts and mine the repositories of six agile development teams. We first reviewed how much importance developers attach to code quality during effort estimations, including how code quality is maintained. We also evaluate the impact of code quality on estimation accuracy.
Developers did not consider code quality high on the priority list during the estimation stage of development. Similarly, we did not find an empirical relationship between the quality metrics and effort estimations. Surprisingly, code quality only had minor effects on the accuracy of the effort estimations. Developers did often encounter quality issues in legacy code. However, overall our study shows that code quality is only of minor importance during agile effort estimations.
In this thesis, we examine developers’ perceptions of linters to increase our knowledge on these tools for JavaScript, the most widely used programming language in the world today. More specifically, we study why and how developers use ESLint, the most popular JavaScript linter, along with the challenges that they face while using the tool. We collect data with three different methods, where we first interview 15 experts on using linters, then analyze over 9,500 ESLint configuration files and finally survey more than 300 developers from the JavaScript community. The combined results from these analyses provide developers, tool makers and researchers with valuable knowledge and advice on using and developing a linter for JavaScript.","JavaScript; Linters; Static Analysis Tools; Software Engineering","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:eb589b41-1bd6-4e87-8ccd-5288ffc4a011","http://resolver.tudelft.nl/uuid:eb589b41-1bd6-4e87-8ccd-5288ffc4a011","Modeling the Exception Flow in PHP Systems","den Braber, Tom (TU Delft Electrical Engineering, Mathematics and Computer Science)","Soltani, Mozhan (mentor); Finavaro Aniche, Maurício (mentor); Wijngaard, Merijn (mentor); Delft University of Technology (degree granting institution)","2017","The goal of this thesis is to learn how exception handling constructs are used by PHP developers. We present an approach for detecting the exception flow of a software system, based on the work of Robillard and Murphy (2003). We show the accuracy of this approach by evaluating the tool on a corpus of three different PHP systems. The approach is thereafter used to perform an empirical study on a corpus of 20 PHP systems. For each of these systems, we compute the exception flow and measure the number of exceptions that are encountered, how often exceptions are propagated before they are caught, by what type they are typically caught, and whether they are documented. The results show that many exceptions are propagated often before they are caught and that many are caught by subsumption. Another finding is that exceptions are often not documented, which in many cases is a violation of the Liskov Substitution Principle.","PHP; Static Analysis; Exception Flow; Software Quality","en","master thesis","","","","","","","","","","","","","",""
"uuid:3b7e2cc6-4810-44a9-a19a-3cc8a260d495","http://resolver.tudelft.nl/uuid:3b7e2cc6-4810-44a9-a19a-3cc8a260d495","Search-Based Test Data Generation for SQL Queries","Castelein, Jeroen (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology)","Finavaro Aniche, Maurício (mentor); Soltani, Mozhan (mentor); van Deursen, Arie (graduation committee); Bozzon, Alessandro (graduation committee); Bosman, Peter A.N. (graduation committee); Delft University of Technology (degree granting institution)","2017","Software testing is an important, well-researched field.
With the majority of modern-day applications using relational databases to manipulate their data, it is crucial that database interactions are tested as well.
This is a complex task to perform manually, and thus researchers have been attempting to tackle this problem by means of automated test data generation.
In their studies, they apply constraint-based techniques using SAT solvers to generate the test data.
However, these techniques have known limitations, such as the inability to solve subqueries.
In this thesis, we present a novel search-based approach that uses a Genetic Algorithm to generate test data for SQL queries, which overcomes the limitations of previous research.
We provide an implementation of our approach, EvoSQL.
In our implementation, we instrument a real database to extract all the information necessary for the fitness function. By doing so, we support all queries using standard SQL syntax.
We evaluate our approach on 2,135 queries from 4 real-world systems, of which EvoSQL is able to cover over 96% fully.","Search-Based Software Testing; SQL; Database; Database Testing","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:d2a020e3-07b3-42c8-a926-0e0e2f7ed6f0","http://resolver.tudelft.nl/uuid:d2a020e3-07b3-42c8-a926-0e0e2f7ed6f0","Automating Valuations for Real-Estate","Wiersma, Ruben (TU Delft Electrical Engineering, Mathematics and Computer Science); Nguyen, Hung (TU Delft Electrical Engineering, Mathematics and Computer Science); Geenen, Alexander","Liem, Cynthia (mentor); Visser, Otto (graduation committee); Smulders, S.M. (graduation committee); Delft University of Technology (degree granting institution)","2017","As GeoPhy is developing its business model and looking into the future of automated valuation models (AVM), this project delivers a proof of concept of a system that automates the training, maintaining, and delivery of machine learning models for automated valuations. In order to achieve this goal, the situation and problem were first analysed. This resulted in an outline of the desired product and requirements in the form of a MoSCoW analysis. An important goal for this project was to incorporate streams of data from a stream processing platform (Apache Kafka) into a service that would train and update models automatically. The second goal for this project was to keep track of the changes in the data in order to detect significant changes in distribution (concept drift) of the target prediction value.
These subjects were studied in literature, reviewing existing and upcoming valuation practices in real-estate, steps needed to perform machine learning tasks, architecture to support big data processing, and concept drift. This resulted in a design made up of four different components: an ETL and data processing component, a modelling component, a Kafka connector, and a client-facing API. An important part to ensure efficiency and scalability of the system is the implementation of concept drift: models are only retrained when the distribution of the target training value has changed significantly.
These components use storage in the form of a Postgres database, disk storage and Elastic Search logs. The logs (on model performance and concept drift usage) can be interpreted through a Grafana dashboard, which is editable through its own GUI.
Finally, to test the success of the project, a testing plan was set up and the code was reviewed by an external group (SIG). The code achieved all the testing milestones and received a 4.5/5 in a mid-development review on maintainability. With this project, the concept of automated valuation models inside GeoPhy’s new architecture has been tested and proved and the project is ready to be further developed and used in practice.","Machine Learning; software architecture; Big Data; concept drift; Real-estate","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:c9486c18-aa92-4087-8e97-99d7fd8dc505","http://resolver.tudelft.nl/uuid:c9486c18-aa92-4087-8e97-99d7fd8dc505","Product design algorithm: A proposition to empower laymen users of 3D printing to create unique design files","Spaapen, Michiel (TU Delft Industrial Design Engineering; TU Delft Industrial Design)","Hoftijzer, Jan Willem (mentor); Sonneveld, Marieke (mentor); Delft University of Technology (degree granting institution)","2017","For many years 3D printing has been one of the most exciting promises in future technologies. One of the issues with the penetration of 3D printing technology is the required proficiency with creation software and the lack of experience in design. This report describes the exploration into a novel means to empower users to create unique design files to 3D-print and in doing so aspires to increase the technology’s audience. The current users of the technology are mainly people with technical backgrounds or highly invested autodidact amateurs. The audience that is targeted with this project consists of creative, tech-savvy early adopters; people who lack the skills but not the inclination. The idea proposed by this project was to find a compromise between freedom and ease of use, while maximising the perceived freedom and sense of authorship. The approach to achieve this goal is by means of formalising a digital design process through an algorithm. By offering a set of instructions and options, the user would be guided through the process. The objective is that the user experiences a successful DIY-type cycle with a sense of genuine authorship over the outcome. It does so by combining several types of tools into a specific combination, setting up a framework for other people to use for specific product types.
It proves to be a multi-faceted problem consisting of: the algorithm; a user interface; a way to guide the user through the process called the Track; and guidelines to create an implementation of the framework, on a meta-level. Each of the facets is explored and combined to create the concept. After thorough analysis and ideation, the concept proposal is the PDA (Product Design Algorithm) framework. By making several prototypes and reviewing them through quick user tests, a lot of insight was gained. This iterative process proved to be a productive means to gain an understanding of the implementation of the proposal. This led to the creation of the final design case: Spectacle. Spectacle showcases the implementation of the framework with a full track, algorithm and user interface. It facilitates the creation of glasses and guides the user in specific steps through the process. By manipulating things like sliders, points and curves, the user forms the design of the glasses. It provides real-time feedback by displaying a representation both in 2D and 3D according to the specific step in the process. In some instances parameters are controlled directly and singularly, and in others they form groups for a more subjective feeling of control. It made use of augmented reality to map the model onto the user's face via a webcam. Spectacle was tested with a group representing the target audience. Through observation, vocalising the thought process, and post-use interviews, new insights were gained that were either implemented immediately wherever possible or otherwise included in the guidelines. This report serves as an exploration into the world of creating specific algorithmic design tools. However, this context is at the forefront of innovation and therefore constantly changing.
While this project tries to make its recommendations as fundamental as possible, it is likely that some things will change over time.","design; parameteric design; diy; software; application; 3D printing","en","master thesis","","","","","","","Campus only","","","","","","",""
"uuid:7c917480-5eb3-46e2-93be-07e95efa30fd","http://resolver.tudelft.nl/uuid:7c917480-5eb3-46e2-93be-07e95efa30fd","Investigating current state Security of OpenFlow Networks: Focusing on the control-data plane communications","Pors, Marlou (TU Delft Electrical Engineering, Mathematics and Computer Science)","Kuipers, Fernando (mentor); Dominguez, Francisco (mentor); Doerr, Christian (graduation committee); Delft University of Technology (degree granting institution)","2017","Software-Defined Networking (SDN) is the emerging paradigm that breaks vertical integration in networks, separating the network’s control logic from the underlying network devices such as routers and switches.
With the decoupling of the data plane and control plane, a new communication channel is needed between the SDN controller and the network devices.
This channel is the so-called control channel and a popular protocol used over this channel is OpenFlow.
In this work we focus on the security of SDN, concentrating on the control channel and the OpenFlow protocol. For example, we show several impersonation attacks and achieve denial of service by misusing the ARP protocol to generate large amounts of OpenFlow traffic.
We also discuss how we can protect SDN against such attacks in order to improve SDN security.
This work has been performed at the IT security company Fox IT.
The requirements of the web portal have been established through interviews with the client and TU coach. During the process of development an agile methodology was used for the division of tasks and to work iteratively towards the end product. Over the eight sprints a fully functioning prototype has
been developed, which satisfies all the main requirements that had been requested by the client. The prototype is separated into three applications: a web interface for the user, a server for persistent data storage and a container for streaming and controlling the rendered scene by Exposure Render. These
applications cooperate to form the requested web portal.
Most companies no longer own datacenters and servers; they rent virtual machines from other companies. Such companies may be interested in accessing the OpenFlow capabilities of switches in the datacenter, but of course they should not be allowed to push a rule that drops all traffic from a competitor. A hypervisor can help in these instances, allowing multiple tenants to use a physical network while guaranteeing that they cannot influence each other.
This thesis presents a hypervisor for OpenFlow 1.3, allowing multiple tenants to use OpenFlow 1.3 features without allowing them to influence each other's traffic, and allowing the network operator to hide network implementation details. A proof-of-concept implementation is also provided to test the ideas presented in this thesis.","SDN; Openflow; Hypervisor; Software Defined Networking; Delftvisor","en","master thesis","","","","","","","","","","","","","",""
"uuid:f30ded3b-7f35-4a93-af55-e1da122235f4","http://resolver.tudelft.nl/uuid:f30ded3b-7f35-4a93-af55-e1da122235f4","Hierarchical Abstraction of Execution Traces for Program Comprehension","Dreef, Kaj (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology)","van Deursen, Arie (mentor); Jones, James (mentor); Delft University of Technology (degree granting institution)","2017","Understanding the dynamic behavior of a software system is one of the most important and time-consuming tasks for today’s software maintainers. In practice, understanding the inner workings of software requires studying the source code and documentation and inserting logging code to map high-level descriptions of the program behavior with low-level implementation, i.e., the source code. Unfortunately, for large codebases and large log files, such cognitive mapping can be quite challenging. To bridge the cognitive gap between the source code and detailed models of program behavior, prior software-execution mining research primarily focused on reducing the size of the low-level instruction execution traces. In contrast, in this thesis we propose a generic approach to present a semantic abstraction with different levels of functional granularity from full execution traces. Our approach mines multiple execution traces to identify frequent behaviors at multiple levels of abstraction, and then analyzes and labels individual execution traces according to the identified major functional behaviors of the system. To validate our technique, we conducted a case study on a large-scale subject program, Javac, to demonstrate the effectiveness of the mining result. 
Furthermore, the results of a user study demonstrate that our technique is capable of presenting users with a high-level comprehensible abstraction of execution behavior.","Dynamic Analysis; Visualization; Hierarchical Abstraction; Labeling; software","en","master thesis","","","","","","","","","","","","","",""
"uuid:67786692-8de1-4a06-aa41-d55b40c76dbf","http://resolver.tudelft.nl/uuid:67786692-8de1-4a06-aa41-d55b40c76dbf","SysML to SLIM transformation methodology: Connecting model-based space systems engineering and model-based software engineering","van der Gaag, J.T.","Guo, J. (mentor)","2017","Identifying faults early on in the design phase drastically reduces the cost of fixing them and can even prevent the loss of a spacecraft. The Correctness, Modeling and Performance of Aerospace Systems (COMPASS) project created a toolset to approach the design of systems from a model-based angle, specifically critical on-board systems for the space domain. The COMPASS toolset requires an input model written in the high-level System-Level Integrated Modeling (SLIM) language. Systems engineers working with a Model-Based Systems Engineering (MBSE) approach commonly use a graphical modeling language to model systems on an architectural level. One of the modeling languages most used for this purpose is the Systems Modeling Language (SysML). For optimal use of the COMPASS toolset in an MBSE design project, a smooth transition between SysML and SLIM is desired. There is currently a gap between these two languages. The challenge lies in the fact that systems engineers using a graphical modeling language are not software engineers, which limits their ability to model the architectural SysML model in SLIM. This thesis presents a newly developed methodology, the SysML to SLIM Transformation Methodology (SSTM), to develop a space SysML model that can be automatically transformed into a SLIM model, ready to be imported by the COMPASS toolset. This allows a systems engineer without software engineering knowledge to model in a familiar modeling environment, but adds the ability to directly run tests on the model using the COMPASS toolset, thus giving valuable insight into the behavior, limitations and possible errors of the system under development.
This ultimately reduces development time and cost. A methodology based on the Object-Oriented Systems Engineering Method (OOSEM) is created that introduces a set of custom stereotypes. Modeling with these stereotypes allows the transformation tool to automatically transform the SysML model into a SLIM model. The open-source, Eclipse-based modeling environment Papyrus is used to model the SysML model. A profile with the required stereotypes is developed which can be imported and used in Papyrus projects. The transformation tool, written in Java, has a simple Graphical User Interface (GUI) to load the SysML model and save it as a SLIM model. Two experiments were executed and are presented in this thesis. Verifying the SSTM methodology and transformation tool was the main focus of the first experiment, in which a battery-sensor system was modeled. A second experiment was performed modeling an Attitude Determination and Control System (ADCS) case study to test the possibilities and limitations of modeling a complex space-related system and to identify improvement points for the SSTM methodology. The first experiment successfully verified that, using the SSTM methodology, a SysML model can be developed without significantly more effort than any other SysML model. It also verified that the tool successfully transformed the SysML model into a correct SLIM model that can be used in the COMPASS toolset. The second experiment, modeling an ADCS case study, showed that a simple space system can successfully be developed with the SSTM methodology. The case study also identified important improvement points for the SSTM methodology for modeling complex space systems. The experiments performed show that the SSTM methodology and transformation tool together bridge the gap between SysML and SLIM models, so that these models can be analyzed in the COMPASS toolset. This can be done without significantly more modeling effort and without in-depth programming knowledge.
At this stage the SSTM methodology and transformation tool are ready to be used on component-based systems with event-based error models. Improvements to the SSTM methodology are proposed; these will increase its maturity and allow the modeler to model complex space system SysML models with more ease and with less programming knowledge.","Model-Based Systems Engineering; Model-Based Embedded Software Engineering; SysML; SLIM; Transformation; Methodology; COMPASS","en","master thesis","","","","","","","","","Aerospace Engineering","Space Systems Engineering","","","",""
"uuid:d8f95cd0-f243-4989-89f3-a8a262b1bfd6","http://resolver.tudelft.nl/uuid:d8f95cd0-f243-4989-89f3-a8a262b1bfd6","Automated Software Testing of JavaScript Web Applications","Rogalla, M.J.","van Deursen, A. (mentor)","2017","Modern software is becoming more and more complex and manual testing cannot keep up with the need for high-quality, reliable software; yet, often due to the complexity of event-driven software, testing is still done manually. This comes with many disadvantages in comparison with automated testing. The increased importance of having a secure, reliable online presence requires testing of JavaScript web applications. This thesis explores the current state of automated testing for JavaScript web applications, presents a new automated testing framework and gives an outlook on future research. It intends to resolve some of the complexity issues to allow for automated testing.","Automated Software Testing; Model; Model Learning; Event-Sequence; JavaScript; Web Applications; Symbolic Execution; Software Testing; Testing Framework; Blue-Fringe; Hubble; SymJS; Fujitsu","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Technology","",""
"uuid:2735d183-7e2e-4b71-b77d-6e4a15147da1","http://resolver.tudelft.nl/uuid:2735d183-7e2e-4b71-b77d-6e4a15147da1","What trip are you looking for?","Koster, Martin (TU Delft Electrical Engineering, Mathematics and Computer Science); Arkesteijn, Youri (TU Delft Electrical Engineering, Mathematics and Computer Science); Meuleman, Mathias (TU Delft Electrical Engineering, Mathematics and Computer Science)","Epema, Dick (graduation committee); Visser, Otto (graduation committee); van der Helm, Sando (mentor); Delft University of Technology (degree granting institution)","2017","","Maintainable; Extendable; Software Quality; Design","en","bachelor thesis","","","","","","","","2017-07-26","","","","","",""
"uuid:06d80709-ad0b-4f9b-a1cc-65634cc42d00","http://resolver.tudelft.nl/uuid:06d80709-ad0b-4f9b-a1cc-65634cc42d00","WatchDog For IntelliJ: An IDE Plugin To Analyze Software Testing Practices","Levaja, I.","Zaidman, A.E. (mentor); Beller, M.M. (mentor)","2016","Software testing is as old as software development itself – the two could not exist without each other. However, are they equally important? Do software developers devote an equivalent amount of time to producing software and to testing it? An ongoing study of the TestRoots project aims to examine and improve the state of the art of software testing and answer those questions by observing developers’ everyday behavior. In order to support this effort, we evolved WatchDog from a single-platform tool into a scalable, multi-platform and production-ready tool which assesses developer testing activities in multiple integrated development environments (IDEs). We further used the WatchDog platform to perform a small-scale study in which we examined the testing habits of developers who use IntelliJ IDEA and compared them to those of Eclipse IDE users. Finally, we were able to confirm that IntelliJ users, similarly to Eclipse users, do not actively practice testing inside their IDEs.","software engineering; software testing; data analysis; developer testing","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","The Software Engineering Research Group","",""
"uuid:5d5fae8c-0f78-4939-91c1-a9381a8b71bf","http://resolver.tudelft.nl/uuid:5d5fae8c-0f78-4939-91c1-a9381a8b71bf","Impact of non-gravitational forces on GPS-based precise orbit determination of low Earth orbiters","Girardin, V.E.J.","Visser, P. N. A. M. (mentor); Jäggi, A. (mentor); Arnold, D. (mentor)","2016","The modelling of non-gravitational forces acting on a satellite (especially a GNSS satellite) started half a century ago. Their modelling for low Earth orbiters (LEOs), however, is more recent because of the dominating atmospheric drag, the modelling of which requires the precision of recent atmospheric models. However, this is not an issue for precise orbit determination (POD), because of the method used to compute the reduced-dynamic orbit. The method uses piecewise constant accelerations (PCA), which absorb the non-conservative forces that are not modelled. One expects the implementation of non-gravitational forces to reduce the amplitude and mean values of the PCA of the satellite's reduced-dynamic orbit. In this thesis assignment, four non-gravitational forces have been considered large enough for improving the GPS-based POD of LEOs using the Bernese GNSS Software: the aerodynamic forces, the solar radiation pressure and the reflected and emitted Earth radiation pressure. For each force, a modelling method has been chosen and implemented in the Bernese GNSS Software. The impact of the modelled non-gravitational forces has been evaluated with the Swarm and GRACE LEOs using all the available validation and comparison methods: PCA, accelerometer data, GPS observation fit, satellite laser ranging observations and K/Ka-band ranging measurements. As expected, the implementation of all the forces reduces the standard deviation of the PCA in each direction, for both Swarm C (-8% in radial, -56% in along-track and -22% in cross-track) and GRACE A (-73% in radial, -75% in along-track and -20% in cross-track).
Regarding the mean values for Swarm C, a large reduction is observed in the along-track (-129%) and cross-track (-97%) directions, but the mean acceleration in the radial direction increases by a few percent (7%). For GRACE A, the mean values of the PCA are reduced along each orbital axis (-88% in radial, -112% in along-track and -23% in cross-track). In addition to the PCA reduction, the precision of the reduced-dynamic orbit has also been improved by the modelling of the non-gravitational forces: the a posteriori standard deviation of unit weight has been reduced from 1.96 mm to 1.93 mm for Swarm and from 2.3 mm to 1.45 mm for GRACE. Finally, different parameterizations of the aerodynamic forces have been carried out in order to determine the best atmospheric model and gas-surface interaction algorithm for LEO POD improvement. The impact of a horizontal wind model has been tested. More specifically, modelling the gas-surface interaction with Goodman's model, using the DTM2013 atmospheric model with the horizontal wind correction HWM14, has shown the best results in LEO POD.","Swarm; Bernese GNSS Software; GRACE, K/Ka-band ranging (KBR); LEO POD; Non-gravitational forces modelling; piecewise constant accelerations (PCA); reduced-dynamic orbit; Satellite Laser Ranging (SLR)","en","master thesis","","","","","","","","","Aerospace Engineering","Astrodynamics and Space Missions","","","",""
"uuid:0b8cae0c-d9fe-4b90-897e-45cd49299b2f","http://resolver.tudelft.nl/uuid:0b8cae0c-d9fe-4b90-897e-45cd49299b2f","User-centered Prioritization: A 2-dimension approach for evaluating the user value of backlog topics","Garcia Mateo, J.","Calabretta, G. (mentor); van der Hoog, W.G. (mentor)","2016","User-centered Prioritization is a method for assessing the user value of the different topics present in the product backlog that agile software development teams work with. It not only puts the users in the spotlight during this decision-making procedure, by basing it on the benefits achieved and the level of fulfilment of user needs, but also provides a more consistent, fact-based and transparent process for priority assignation. User-centered Prioritization determines user value through the two dimensions of customer satisfaction and impact. Customer satisfaction is considered the level of contentment of customer expectations regarding the product or service and the fulfilment of user needs. On the other hand, impact can be seen as the relevance of a specific feature for customers, evaluated through usage and adoption rates. User-centered Prioritization is composed of three tools. The Satisfaction Questionnaire and the Satisfaction Analysis serve for the analysis and determination of value regarding customer satisfaction. The third tool, the Impact Grid, is used for the assessment of impact through the expected usage and adoption of the different features, as well as the combination of both dimensions for an effective appraisal of the overall user value that guides the subsequent assignation of priority.","design; software; prioritization; insights; user value; product backlog","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Product Innovatie Management","","Strategic Product Design","",""
"uuid:3b14e941-f1b3-40a0-95d4-35e7fae5b578","http://resolver.tudelft.nl/uuid:3b14e941-f1b3-40a0-95d4-35e7fae5b578","Scaled agile maturity model","Chandrasekaran, R.M.","Janssen, M.F.W.H.A. (mentor); Ubacht, J. (mentor); Warnier, M.E. (mentor)","2016","In today’s world, agile software development has been embraced more and more in the software service industry. Though agile practices have gained widespread popularity in recent years, there are quite a number of concerns in scaling the agile practices from team level to the entire enterprise. A few frameworks such as the Scaled Agile Framework (SAFe), Disciplined Agile Delivery (DAD) and Large Scale Scrum (LeSS) have been developed to address these concerns in scaling agile practices. Although these frameworks provide a template for scaling agile in large enterprises, there is currently a lack of a holistic method which would help organizations in implementing scaled agile practices or adapting to scaled agile software development. Before or after adopting such a framework, organizations require a structured model for assessing the level of completeness of adoption or finding areas of improvement in the scaled agile practices, which would also help them in developing a roadmap for further progress and initiatives. This research offers guidance for IT organizations towards scaled agile software development by providing a maturity model. This maturity model is composed of six stages as rows and five scaled agile principles as columns, in which the stages and principles together form a matrix of scaled agile practices. Each of the practices consists of indicators which help in assessing the level of adoption of the practices. Having identified the lacking criteria in the adoption of scaled agile practices, organizations can start focusing on those criteria and other areas of improvement. 
The research also strongly encourages a collaborative spirit in the adoption of scaled agile practices, by using this model as a discussion tool at the team, program and portfolio levels of a scaled agile environment. Future research in this arena could investigate the dynamics of emergence in scaled agile practices and the effect of such an emergence on the multitude of stakeholders involved in a scaled agile process.","scaled agile framework; agile software development; maturity model; scaled agile practices; ambidexterity","en","master thesis","","","","","","","2017-08-17","Technology, Policy and Management","System engineering policy analysis and management","","","",""
"uuid:5e3dfbfe-19a8-4ac4-b9b6-117aab0737bf","http://resolver.tudelft.nl/uuid:5e3dfbfe-19a8-4ac4-b9b6-117aab0737bf","Providing End-to-end Bandwidth Guarantees with OpenFlow","Krishna, H.","van Adrichem, N.L.M. (mentor)","2016","QoS (Quality of Service) control is an important concept in computer networking as it is related to end user experience. End-to-end QoS guarantees, in particular, can give firm guarantees to end hosts. Unfortunately, it has never actually been used on the Internet since it was deemed too complicated. With the emergence of Software Defined Networking (SDN) and OpenFlow as its most popular standard, we have an opportunity to re-introduce the QoS control concept. The centralized nature and programmability of OpenFlow allow more flexible and simpler QoS control. In this thesis, we propose an end-to-end bandwidth guaranteeing model for OpenFlow. The primary design consideration of the model is to allow a QoS flow to send more than its guaranteed rate. To further maximize the overall network utilization, best-effort flows are allowed to use any unused bandwidth in the network. A bandwidth borrowing concept is employed to achieve this. To ensure that it will not affect the guaranteed bandwidth for the QoS flows, we analyze the reliability of the bandwidth borrowing concept in Linux HTB, which is used as the underlying mechanism of the OpenFlow queue. From the simulations, we found that the borrowed bandwidth is returned instantly when a QoS flow requires it. Thus, it is possible to guarantee bandwidth and maximize bandwidth utilization at the same time. We also explore the possibility of using the OpenFlow meter table for traffic aggregation. The aggregation only adds overhead in the first switch, with no additional complexity in the subsequent switches. 
Therefore, it solves the scalability problem which is commonly associated with end-to-end QoS guarantees.","Quality of Service; Software-defined networking; OpenFlow","en","master thesis","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Electrical Engineering","","Telecommunications and sensing system","",""
"uuid:1254d20e-2de1-406f-8699-10762a2b9902","http://resolver.tudelft.nl/uuid:1254d20e-2de1-406f-8699-10762a2b9902","BEPStore: The reverse App Store","Heemskerk, B.; Kooyman van Guldener, W.; Sluis, S.","Hendriks, E.A. (mentor)","2016","FeedbackFruits is a company that offers an online learning solution to help innovate education. Their platform is used on a daily basis by teachers and students to improve their learning experience. When using the platform, the users often think of valuable feedback and new features they would like to see added. This report describes the process of designing and implementing a platform for the purpose of collecting this feedback in a central location and streamlining the process of acting upon it.","community engagement; communication; distributed; requirements; software development; open source; github; feedbackfruits","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","Computer Science","",""
"uuid:2e8cf03e-2353-4770-be9b-686f3cf12229","http://resolver.tudelft.nl/uuid:2e8cf03e-2353-4770-be9b-686f3cf12229","Ethernet Circuit Failure Detection over Aggregated Trunks","Deshmukh, S.","Kuipers, F.A. (mentor); Van der Pol, R. (mentor)","2016","With the increase in data-intensive research in recent years, the Ethernet circuit, which is a high speed point-to-point connection, can be used for transmitting large amounts of data between sites. Customers use the trunk port to connect to the operator network. It allows multiple Ethernet circuits to share the same trunk port, resulting in efficient utilization of the bandwidth of the port. It distinguishes each (VLAN) service on the basis of VLAN identifiers. When redundancy needs to be offered in the network using two trunk ports, detecting an individual Ethernet-circuit failure over the trunk and load balancing per-flow traffic between active trunks are not possible because the existing technique, namely link aggregation, has limitations. Link aggregation does not support per-VLAN failure detection and can only be set up between directly connected network elements. Hence, it cannot be used for end-to-end failure detection when intermediate network elements are involved. In this thesis, alternative Layer 2 technologies are identified for detecting per-Ethernet circuit failure over the trunk and for per-flow traffic load balancing. Both traditional networking-based as well as software-defined networking (SDN)-based approaches are investigated to solve the aforementioned problems, and the findings are summarized. An SDN-based design to solve both the failure detection and load balancing problems is proposed. Furthermore, the proposed solution is validated using a proof of concept (POC) implementation. Finally, the performance of the POC implementation is evaluated and the findings are summarized along with recommendations for future work. 
Our findings reveal that existing Layer 2 technologies lack support for successfully detecting per-Ethernet circuit failure over the trunk and per-flow traffic load balancing between active trunks. However, an SDN-based approach can successfully be deployed to solve both problems.","ethernet circuit; software-defined networking; link aggregation","en","master thesis","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","",""
"uuid:8174a80b-b048-47df-b4a6-9e48d83bae96","http://resolver.tudelft.nl/uuid:8174a80b-b048-47df-b4a6-9e48d83bae96","MetaC - Embedded Software specific extensions for the C programming language","Stolwijk, A.C.","Langendoen, K.G. (mentor)","2016","Embedded software appears in a variety of systems and products. The software for these systems has special requirements. Firstly, embedded software needs to be very robust as it is usually deeply tucked away and not very visible to the user. Secondly, embedded systems have dedicated hardware the software has to work on, and finally, embedded systems can have real-time constraints. The C programming language is the most popular programming language for these kinds of systems and meets the requirements. Two disadvantages of C are that it is hard to create higher-level abstractions to solve the programmer's problem at hand, and that it is easier to create bugs compared to more modern languages. To address these disadvantages we propose the MetaC language, which extends the C language with domain specific extensions tailored for embedded systems. The MetaC compiler compiles MetaC code, including the C language and the extensions, to C code. What extensions can be designed to be helpful for embedded software developers? MetaC implements the C language and adds the following extensions: 1) A bit-fields extension for declaring a bit-fields layout with names and using those names to manipulate the bits instead of using logical operators. 2) State machines to encode the state behavior of a system with new syntax and semantics, which makes it possible to generate state machine diagrams. 3) A concurrency extension for communication between concurrent processes with channels using CSP-style semantics. The extension can generate a model for the PAT model checker. 4) An error handling extension adding error handling constructs that are missing in the C language. Functions can return a new type that indicates the function can return an error. 
The type system forces programmers to handle the errors of those functions. The MetaC compiler is implemented using the Spoofax Language Workbench, which also provides an Integrated Development Environment (IDE) with common IDE features. The design goals are to implement the extensions in a modular way, to allow separate development of extensions, and to integrate into C as much as possible to give a C feel to the extensions. A BaseC module implements the C compiler while separate modules implement the extensions. The modules are composed into the final MetaC compiler. What problems arise when implementing MetaC with Spoofax? To reach the design goals, several issues had to be solved while implementing MetaC with Spoofax. These issues include determining the precedence of new expression operators, the composition of scoping rules for new language constructs, and the reuse of existing name binding rules for new extensions.","Spoofax; C programming language; extensions; domain specific languages; embedded software","en","master thesis","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Embedded Software","","","",""
"uuid:3d0521b7-7fd4-40ab-8bc4-7109267b5ba4","http://resolver.tudelft.nl/uuid:3d0521b7-7fd4-40ab-8bc4-7109267b5ba4","fine-GRAPE: Fine-Grained APi usage Extractor An Approach and Dataset to Investigate API Usage","Sawant, A.A.","Bacchelli, A. (mentor)","2015","An Application Programming Interface (API) provides a specific set of functionalities to a developer, with the aim of enabling reuse. APIs have been investigated from different angles, such as popularity, usage and evolution, to get a better understanding of their various characteristics. For such studies, software repositories are mined for API usage examples. However, the mining algorithms used for such purposes do not take type information into account, thus making the results imprecise. In this thesis, we aim to rectify this by introducing fine-GRAPE, an approach that produces fine-grained API usage information by taking advantage of type information while mining API method invocations and annotations. fine-GRAPE establishes a connection between a method invocation and the class of the API to which the method belongs. By means of fine-GRAPE, we investigate API usages from Java projects hosted on GitHub. We select five of the most popular APIs across GitHub Java projects and collect historical API usage information by mining both the release history of these APIs and the code history of every project that uses them. We use the resulting dataset to perform four separate analyses. The first measures the lag time of each client by leveraging the version information that has been collected. We see that in most cases clients do not upgrade the version of the API that they are using to the latest version. The consequence of this is that the lag time that each client displays is quite high. The second study investigates the percentage of API features that are used by using the type information in the dataset. 
The results of this study show that a very small percentage of an API is actually used by clients in the real world. Our third study aims to show the relation between popular features and software quality. Finally, the fourth study analyzes the reaction of clients to the deprecation of API artifacts. Our deprecation study shows that most clients do not really react to deprecated entities.","API; mining software repositories; deprecation; popularity","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:35e27809-b498-4e69-b846-27108eb4c2e1","http://resolver.tudelft.nl/uuid:35e27809-b498-4e69-b846-27108eb4c2e1","Maturity model of capabilities to prepare for implementing agile software development","Mercan, S.","Janssen, M.F.W.H.A. (mentor); Ubacht, J. (mentor); Rook, L. (mentor)","2015","","agile software development; capabilities; maturity model; stages model","en","master thesis","","","","","","","","","Technology, Policy and Management","ICT","","Management of Technology","",""
"uuid:1cdca7a1-a440-4dce-9fce-859fd9701839","http://resolver.tudelft.nl/uuid:1cdca7a1-a440-4dce-9fce-859fd9701839","Accelerating aircraft design using automated process generation: An experimental architecture for aircraft design software","Ramakers, M.A.Y.","Vos, R. (mentor); Hoogreef, M. (mentor)","2015","The aircraft industry has seen many evolutionary changes in the past decades. Since the Boeing 707, however, the general shape and configuration of transport aircraft have remained similar and so has the aircraft design process. Since the aviation community is about to face a revolutionary breakthrough featuring new aircraft concepts like blended-wing body aircraft and Prandtl planes, it is time to adapt the design process as well. The aircraft design process may have been extended by computational novelties and has been largely computerized, but the fundamental design process is still similar to what is used for conventional aircraft. This research investigates the effect of the design process on the design outcome and strives to find a method of automatically generating an aircraft design process given a set of computational modules, initial values and design goals. A software architecture based on a strict separation of components and a process modelling approach is proposed. A framework based on this architecture is developed in which design parameters and computational modules are modelled as nodes in a graph. A subset of the conceptual aircraft design process is simplified and implemented. Twelve algorithms are proposed to perform the automated ordering of the computational modules. The ordering is based on the module run time, the estimated impact of the module on the design or the current state of the aircraft design, among others. The design process used in the Initiator aircraft design software is used as a benchmark. Two test cases are formulated, representing Class I and Class II design problems. 
Two additional Class II test cases are formulated to rule out the impact of the scheduling overhead by simulating the module runtime of fully-featured modules. Each test case is then solved by the proposed algorithms. The result is a system capable of generating a feasible (but sub-optimal) design process out of a set of stand-alone computational modules. This is achieved with no prior knowledge of the design process. From the test cases it can be concluded that the design outcome is not affected by the design process order that is employed, if the used set of modules is not changed. It is also concluded that the classical, fixed design process outperforms the design processes generated by any of the algorithms for Class I and Class II design problems by 20-40%. Of the designed algorithms, the one that orders modules based on the expected change from re-evaluating a module versus its expected run time performs best for the complex (Class II) design case. The impact of the scheduling overhead is shown to be negligible. Since the system is capable of generating feasible, but sub-optimal, design processes, it is recommended that it be used as a tool to generate new design processes for unconventional aircraft configurations. Finally, it is recommended that the presented software structure, based on process modelling and separation of components, be adopted for the next version of the Initiator.","aircraft design; process modelling; automated; software","en","master thesis","","","","","","","","Aerospace Engineering","AWEP","","FPP","",""
"uuid:a5d54afd-277f-4895-85eb-eedbcf45aa0a","http://resolver.tudelft.nl/uuid:a5d54afd-277f-4895-85eb-eedbcf45aa0a","Management Guideline for Software Development Projects (Public Version)","Sengur, S.","","2015","The needs of companies are evolving and becoming more demanding each year, which results in more complex stakeholder relations. The complexity of the projects increases as well. For instance, stakeholders do not always have the same sense of urgency and their expectations from the same project might differ. Such variety in demand urges project managers to opt for a new, broader management approach that includes human issues as well as hard skills. One area where this situation is observed in contemporary projects is software development projects with a complex stakeholder environment, where the project managers have the responsibility to deliver a high-quality end result while at the same time meeting the demands of stakeholders with different senses of urgency. This study aims to answer the research question “How can software projects having unclear requirement definitions and stakeholders with different senses of urgency be managed successfully?” The research proposes a structured set of methodologies with a guideline for professionals and researchers to manage software development projects having unclear requirement definitions and stakeholders with different senses of urgency.","software development; different sense of urgencies; unclear requirement definition","en","master thesis","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","Systems Engineering, Policy Analysis and Management","",""
"uuid:5db7a841-ae07-491a-86a3-630ce267c265","http://resolver.tudelft.nl/uuid:5db7a841-ae07-491a-86a3-630ce267c265","Stakeholder Management Methodologies for Software Development Projects Having Complex Stakeholder Environment with Different Sense of Urgencies","Sengur, S.","","2015","The needs of companies are evolving and becoming more demanding each year, which has resulted in more complex stakeholder relations. Stakeholders do not always have the same sense of urgency and their expectations from the same project might differ. Although the current literature suggests successful methodologies for software development projects, it remains inadequate when different senses of urgency exist among the stakeholders. This creates the risk of developing negotiated nonsense software. There is a need for a broader approach in which human issues are addressed together with the traditional project management methods. This research aims to propose a set of stakeholder management methodologies that could be implemented in a software development project environment where different senses of urgency exist. The suggested methodologies are validated by expert opinion.","stakeholder management; different sense of urgencies; software development","en","master thesis","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","Systems Engineering, Policy Analysis and Management","",""
"uuid:ff7acb60-a3e9-4f72-9c8d-bc65398d8d6a","http://resolver.tudelft.nl/uuid:ff7acb60-a3e9-4f72-9c8d-bc65398d8d6a","Empirical Software Linguistics: An Investigation of Code Reviews, Recommendations and Faults","Hellendoorn, V.J.","Bacchelli, A. (mentor)","2015","Communication is fundamental to human nature and underlies many of its successes as a species. In recent decades, the adoption of increasingly abstract software languages has supported many advances in computer science and software engineering. Although in many regards distinct from natural language, software language has proven surprisingly similar to it as well and has been studied successfully using natural language models. Recent studies have investigated this ""naturalness"" property of software in relation to a variety of applications including code completion, fault detection, and language migration. In this thesis, based on three research papers, we investigate three main aspects of software naturalness. Firstly, we investigate the relation between perceived (un)naturalness of source code (according to the statistical model) and the reaction to such code by software developers. In open-source projects, we find that those contributions which contain code that (statistically speaking) fits in less well are also subject to more scrutiny from reviewers and are rejected more often. Secondly, we investigate an application of highly predictable code: code completion. Previous work had evaluated the performance of language models in this application in isolation; we compare the language model approach to a commonly used code completion engine. We find that it compares favorably, achieving substantially higher accuracy scores. In particular, a combination of the two approaches yielded the best results. Finally, we investigate instances of highly unpredictable code in order to automatically detect faults. 
We find that buggy lines of code are substantially less predictable, becoming more predictable after a bug is fixed. Our bug detection approach yields performance comparable to popular static bug finders, such as FindBugs and PMD. Our results further confirm that statistical (ir)regularity of source code from a natural language perspective reflects real-world phenomena.","software linguistics; software engineering; fault detection; code completion; code review","en","master thesis","","","","","","","2015-08-18","Electrical Engineering, Mathematics and Computer Science","Computer Science","","Software Engineering","",""
"uuid:7179edff-b498-4c3b-900d-62d61f8665df","http://resolver.tudelft.nl/uuid:7179edff-b498-4c3b-900d-62d61f8665df","Using Github Profiles in Software Developer Recruitment and Hiring","Nagaram, A.R.","Hauff, C. (mentor)","2015","Social coding platforms can provide an initial understanding of the skills exhibited by the developers on these platforms. In contexts where candidates' social profile information is useful for recruiting software developers, the information regarding the developers on these platforms can be leveraged by recruiters with some software knowledge. However, recruiters have to put much effort into inferring a developer's skills on social coding platforms. In this thesis, we investigate providing relevant information regarding software developer capabilities on a social coding platform to recruiters. We used GitHub as our social coding platform for this purpose. We explored which attributes to use for indicating the skills exhibited by a developer on GitHub. We also investigated GitHub as a resource containing potential software developer candidates by recommending GitHub developer profiles solely based on the skill set requirements of job advertisements. Our results indicate that the generated developer skill profiles have a valid set of attributes which, when combined, indicate the three skills exhibited by a software developer on GitHub. However, the generated profile was only slightly preferred by the technical recruiters because of the profile's complexity and incompleteness. 
In the investigation of recommending developer profiles to suit job advertisement requirements, our recommendation strategy could only achieve a precision of 0.39 on average and a Normalized Distance Based Performance Measure (NDPM) ranking accuracy value of 0.43 on average.","GitHub Data; Software Job Advertisements; Content-Based Recommendation","en","master thesis","","","","","","","2015-08-21","Electrical Engineering, Mathematics and Computer Science","Computer Science","","Software Technology","",""
"uuid:5b91376a-83f3-4993-81c2-2f2a2743a7cb","http://resolver.tudelft.nl/uuid:5b91376a-83f3-4993-81c2-2f2a2743a7cb","Requirements Engineering Practices in Global Software Engineering Organizations: A Study in the Banking Industry","Reza, A.Y.","Van Solingen, R.V.S. (mentor); Bacchelli, A.B. (mentor); Hidders, J.H. (mentor)","2015","In this thesis we report on our investigation of requirements engineering (RE) practices and challenges in global software engineering (GSE) settings. We conducted a literature survey and a series of interviews/surveys to reach our goal. The subject of the research is the banking industry in the Netherlands actively involved in GSE. More specifically, the goal of this research is to find out what banking organizations have learned from RE practices when used in GSE settings. Specifically, the project investigates how GSE teams handle RE problems, especially in the beginning of a project, and attempts to identify the solutions used in practice to deal with such challenges. The overall conclusions are that the use of liaison officers, the use of online collaboration tools, and the use of a transparent RE process are the common practices that are used by the banks in the Netherlands to overcome their RE challenges in GSE project settings.","Requirements Engineering; Global Software Engineering; Organizations","en","master thesis","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Science","","Information Architecture","",""
"uuid:094c5406-3784-4a9d-b6c8-ae10f88ccd28","http://resolver.tudelft.nl/uuid:094c5406-3784-4a9d-b6c8-ae10f88ccd28","An Extensible Toolkit For Real-Time High-Performance Wideband Spectrum Sensing","Bruinsma, W.P.; Hes, R.P.; Kroep, H.J.C.; Leliveld, T.C.; Melching, W.M.; Aan de Wiel, T.A.","Leus, G. (mentor); Ariananda, D.D. (mentor); Chepuri, S.P. (mentor)","2015","This document describes the design process of a software toolkit to perform high-performance wideband spectrum sensing. A prominent application of this is Cognitive Radio, a technique that aims to make more efficient use of the available radio spectrum. An extensive theoretical analysis will be performed. Various non-uniform sampling techniques will be discussed, such as coprime and circular sparse sampling. An algorithm to reconstruct the PSD of sub-Nyquist sampled signals will be developed and a detection algorithm which uses this PSD will be proposed. This analysis will be utilised to implement an extensible software toolkit written in Python. The software architecture and various design patterns that were utilised to structure the toolkit will be described and its quality and performance will be analysed. The hardware used for data acquisition, a USRP N210, will be introduced. The work will be concluded with a discussion of the findings.","spectrum sensing; wideband; compressive; real-time; non-uniform sampling; usrp; software defined radio; multiprocessing","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Circuits and Systems","","Electrical Engineering","","51.998922, 4.373499"
"uuid:2b9a2e1a-36c3-464b-a421-2bac202fb952","http://resolver.tudelft.nl/uuid:2b9a2e1a-36c3-464b-a421-2bac202fb952","From 0 to 1: Seizing the opportunity for a new microservice development environment","Van der Veer, A.L.","Santema, S.C. (mentor); Geraedts, J. (mentor)","2015","Welcome to the information age, a period in human history that shifts the focus of our economy from traditional industry to one based on data processing and information technology. The Internet is the invention that stands at the center of this new age; it connects billions of people around the planet in a single network. The apps on our smart phone, laptop and tablet are at one side of this network, while “servers” are the computers on the other side. These machines handle and store our most precious family photos, intimate emails and financial information. Consumers expect the software that runs on these servers to handle thousands of simultaneous connections while being secure, safe and always online. As the number of customers grew, so grew the complexity of this software in order to meet the increasing demand. “Microservices” is a novel approach that aims to cope with the complexity of server software. In such architectures, large software processes are turned into many smaller programs that are each responsible for a specific subset of the overall functionality. This, in turn, makes it easier to split the work across multiple developers. One person (or team) can be made responsible for the complete life-cycle of a single microservice and is therefore able to innovate independently. This makes sense in an “Agile” environment: a methodology that encourages iterative development and fast failures. But this will only work when services can be put into operation quickly after a new iteration has finished development; often driven by automation, this practice of bringing operations closer to development is called “DevOps”. 
Even though microservices are separated in terms of innovation, on a technical level it is often impossible to decouple them completely from one another. During user research it was shown that such coupling between services causes problems for the DevOp when he is developing and testing new code on his workstation. The coupling makes it very hard to isolate a single microservice: working on it means installing and running all the other services it depends on, and the services that those depend on, and so forth. First, this lack of isolation forces the DevOp to install, update and configure dozens of programs before he or she can begin development. Secondly, due to the nature of microservices, a single test can cause the environment to change in many different places; resetting the environment for another test becomes tedious, error-prone and time-consuming. “Docker” is a new technology that has seen rapid adoption in 2014. It allows software processes to be encapsulated in a standardized fashion, so called “containerization”. While large firms are focusing on developing Docker solutions for data center operations, industry analysis shows there is little competition when it comes to using Docker for the development and testing of server software. With a series of prototypes it is shown that Docker can be used to solve the aforementioned isolation problems and help the DevOp with developing and testing his microservice. With further prototyping, the solution was developed into a concept called “Dockpit”. Dockpit provides an interface that allows DevOps to quickly fix the dependencies of their microservice into place and store this information in a single file that lives next to the source code of the service. On command, it reads this file and passes the instructions to Docker to let it take care of (re)setting the environment. Dockpit makes it possible to isolate a microservice in minutes and reset the environment in seconds with the push of a button. 
Two user validation studies were set up to verify if Dockpit was capable of delivering value to the target group. The first study was set up to measure product-solution fit and took place at the company Giant Swarm. Through multiple hands-on sessions it was shown that the lack of isolation was an important barrier for the company and prevented them from testing their microservices consistently to gain a better grip on the stability of their product. The second validation was aimed at measuring product-market fit. A minimum viable product was created and released online, and three press releases were published to drive initial traffic from the marketplace. A total of 336 subjects visited the homepage; only 9% of them continued to download Dockpit, and none of the users were retained after 2 days. These results show that product-market fit has not yet been reached. Continued effort will be put into simplifying and reshaping the product towards a better product-market fit. Additionally, more emphasis needs to be put on the design and validation of a business model.","microservices; dockpit; software; programming; server; product owner","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Product Innovation Management","","","",""
"uuid:3a15293b-16f6-4e9d-b6a2-f02cd52f1a9e","http://resolver.tudelft.nl/uuid:3a15293b-16f6-4e9d-b6a2-f02cd52f1a9e","In Dependencies We Trust: How vulnerable are dependencies in software modules?","Hejderup, J.I.","Van Deursen, A. (mentor); Mesbah, A. (mentor)","2015","Web-enabled services hold valuable information that attracts attackers to exploit services for unauthorized access. The transparency of Open-Source projects, shallow screening of hosted projects on public software repositories and access to vulnerability databases pave the way for attackers to gain strategic information to exploit software systems using vulnerable third-party source code. In this thesis, we explore the character of JavaScript modules relying on vulnerable components from a dependency viewpoint. We studied the npm registry, a popular centralized repository for hosting JavaScript modules, by using information from security advisories in order to determine: the prevalence of modules depending on vulnerable dependencies, the propagation in the dependency chain and the time window to resolve a vulnerable dependency. This was followed by a qualitative study to understand dependency management practices in order to investigate why dependencies remain unchanged. The outcome of this study shows that one-third of the modules using at least one advisory dependency resolve to a vulnerable version. The qualitative study suggested that a majority of the modules lacked awareness or discussion about known vulnerabilities. Furthermore, the key findings indicate that the usage context of the module and breaking changes are potential reasons for not resolving the vulnerable dependency.","Software Security; JavaScript; Node.js; Known Vulnerabilities","en","master thesis","","","","","","","","2015-05-14","Electrical Engineering, Mathematics and Computer Science","Department of Software Technology","","Software Engineering Research Group","",""
"uuid:ce16a77c-f286-4dd7-89ba-5b38ccb2cf68","http://resolver.tudelft.nl/uuid:ce16a77c-f286-4dd7-89ba-5b38ccb2cf68","Network-as-a-Service Architecture with SDN and NFV: A Proposed Evolutionary Approach for Service Provider Networks","Manthena, M.P.V.","Kuipers, F.A. (mentor); Van den Broek, C. (mentor)","2015","The Internet continues to grow exponentially with the proliferation of devices and users being connected to it, along with an exploding demand for various resource- and performance-intensive network services like multimedia content distribution, security, mobility, and machine-to-machine (M2M) communications. However, the current TCP/IP (Transmission Control Protocol/Internet Protocol) based Internet architecture, which was developed over 40 years ago and was neither prepared nor designed to successfully meet such explosive demands of today, is leading to the growing ossification of the Internet with its increasingly closed, complex, and rigid state. This limits innovation in such networks and their corresponding services. To overcome this ossification problem of the Internet, coupled with a lack of innovation in the provisioning and management of network services, more and more service providers and network operators are embracing the concept of virtualization for their networks. This trend is largely inspired by the recent success of cloud-based service models along with their chief enabler, virtualization, in addressing similar problems in the computing and storage fields of Information Technology (IT). Although recent advances in the field of networking are witnessing new virtualization-enabling network technologies being proposed, it is still a challenge to logically combine a set of them to realize cloud-based service models for service provider networks. This situation is mainly due to the concerns over these proposed technologies in terms of scalability, reliability, interoperability, and disruptive nature. 
In this thesis, an evolutionary approach to implementing the Network-as-a-Service (NaaS) cloud-based service model for service provider networks is proposed with Software-Defined Networking (SDN) and Network Function Virtualization (NFV) as its key virtualization enabling network technologies. In essence, the proposed evolutionary approach realizes the major benefits of network virtualization such as vendor-neutrality, simplicity, and flexibility while successfully addressing the stated concerns over SDN and NFV technologies in the proposed NaaS architecture. Furthermore, a proof-of-concept (PoC) implementation of the proposed NaaS architecture on a physical network testbed is demonstrated along with an innovative provisioning and management of basic network connectivity services over it. Finally, the proposed evolutionary approach is validated by an experimental performance evaluation of the PoC physical network testbed along with the recommendations for improvement and future work.","Network Function Virtualization; Network Virtualization; Network-as-a-Service; Service Provider Networks; Software-Defined Networking","en","master thesis","","","","","","","","2015-02-20","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","Network Architectures and Services Group","",""
"uuid:4417c6d3-b90a-4f25-9b16-a877fd941aad","http://resolver.tudelft.nl/uuid:4417c6d3-b90a-4f25-9b16-a877fd941aad","Strengthening the Exact concept by demonstration of key values by means of a physical product.","Van Oorschot, B.H.","Pasman, G.J. (mentor)","2014","In this project for Exact Online, a physical product was designed to accompany the online software. The product was named the Exact Online Skylight. The product scans invoices directly into the software by use of a camera module.","Internet of Things; administration; business software","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Strategic Product Design","","","",""
"uuid:d41eae46-316e-4791-9f00-ee320c8d2762","http://resolver.tudelft.nl/uuid:d41eae46-316e-4791-9f00-ee320c8d2762","Value creation through network infrastructure automation: The Software-Defined Networking Technology and its business model from the IT service provider’s perspective","Liaros, P.","Bouwman, H. (mentor); Van der Duin, P. (mentor); Van der Bijl, W. (mentor)","2014","Our research explores how the computer network technology of Software-Defined Networking (SDN) can benefit an IT service provider. SDN is a novel technology that is capable of managing a whole network from a single centralized entity. As a result, an SDN-enabled network becomes more flexible, automatable and programmable to satisfy users’ needs. SDN with its automation features can enhance cloud services, mainly the well-known Infrastructure as a Service (IaaS), as it can significantly reduce the services’ delivery time and the costs of their provisioning. More specifically, this thesis emphasizes how the IT service provider’s datacenter services can be improved, both for its own benefit and for the benefit of its customers. Our approach to the exploration and exploitation of the novel technology is the design of a business model enabled by SDN technology. The primary stakeholder and user of the business model is the IT service provider. Additionally, we should note that this research is based on a single case study, executed in collaboration with the Dutch branch of Capgemini. For the completion of our research, we conduct an extensive literature review and interviews with experts in the field. In the remainder of this summary, we briefly describe the contents of each chapter as well as our findings, conclusions and recommendations for each one of them. The master thesis consists of six chapters, which are interconnected as a logical continuation of each other. More specifically: Chapter 1 – Introduction. 
This chapter gives a general overview of the contents of the master thesis, the issue stakeholder, the research objectives, and the research questions. The issue stakeholder is the IT service provider, and our three research objectives are: “the identification of the current state and future trends of the SDN technology both in the market and the academia”, “the identification of the business model framework that supports the case of the SDN technology adoption from an IT service provider” and “the design of a business model for delivery of SDN enabled services”. These three research objectives are translated into three corresponding research questions. Together, the three research questions answer the main research question. Chapter 2 – State of the art of SDN - Current status and future trends. The second chapter answers the first research question: “What is the state of the art of SDN?”. It covers the state of the art of SDN and its future trends. A brief summarized answer would be that SDN is currently still in its early adoption phase and OpenFlow is the dominant enabling protocol. Moreover, there are many different SDN controllers on the market, and even more non-proprietary projects are supported by big established infrastructure vendors. Use cases of SDN are focusing on: network management and availability, security assurance and innovative wireless implementations. The diffusion of SDN technology across the entire market is yet to come. The future of the SDN technology is forecasted to be full of new developments that will be open source oriented. The SDN market is constantly growing and academia is eager to keep exploring and exploiting the domain. The outcomes of chapter two are taken into consideration for the design of the business model in chapter five. Chapter 3 – Business Model Literature Review. 
The third chapter of the master thesis answers the second research question: “Which business model framework best supports the design of an SDN business model?”. To answer it, an extensive comparison of three different business model frameworks is made. The STOF, VISOR and Canvas frameworks are compared, and the STOF framework is chosen as the best fit for the case of SDN. Chapter 4 – Research design methodology. In the fourth chapter the research design of the thesis is made explicit. The chapter analyzes the research design methodology, the data collection methodology and the way the chosen business model framework is used. We make use of the design science research of Vaishnavi & Kuechler (2007) and the design cycle for design-oriented research of Verschuren & Hartog (2005). In addition, we extract all the necessary data for our research through interviews with experts. Chapter 5 – Business model design. The fifth chapter is the initiation of the second part of the thesis, where the business model is designed and the third and last research question is answered: “Based on the selected business model framework, what does a viable business model that integrates SDN technology in an IT service provider’s datacenter infrastructure look like?”. Chapter 6 – Discussion and Conclusions. Conclusions, limitations and future research, together with some recommendations and a reflection on the whole thesis, are included in this chapter. Moreover, the main research question is answered as well.","SDN; Software Defined Networking; Value creation; automation; IT service provider; business model; STOF; design science; computer networks; innovation; sourcing","en","master thesis","","","","","","","","2015-11-04","Technology, Policy and Management","Information and Communication Technology","","Management of Technology","",""
"uuid:8f3a48b4-ac7c-4c7c-8340-fc7dd7865563","http://resolver.tudelft.nl/uuid:8f3a48b4-ac7c-4c7c-8340-fc7dd7865563","Increasing robustness of Software-Defined Networks","Van Asten, B.J.","Van Adrichem, N.L.M. (mentor)","2014","In this thesis, an overview of the performed research is given, investigating possible enhancements and solutions to enable SDN as a future network paradigm. Currently, besides robustness, problems exist with scalability and security in the application of SDN to current network infrastructures. On robustness, current research does not provide the necessary solutions to detect failures and activate protection schemes to fail over to pre-configured backup paths within the set failover requirements. We attempt to reduce the failover times on Ethernet IP networks with the application of active link monitoring and advanced capabilities of the OpenFlow protocol. To enable a protection scheme, a routing algorithm is required that provides link-based protection. We propose a protection algorithm that guarantees protection, minimizes path cost on the primary path and discovers protection paths for intermediate switches on the primary path, with the main purpose to minimize failover times, optimize network traffic and reduce the need for crankback routing. In short, we provide a complete solution to increase the robustness of Ethernet IP networks to the level of carrier-grade and industrial networks with the application of a link-based protection scheme and optimal routing algorithm, combined into a Software-Defined Networking solution.","Software-Defined Networks; robustness; recovery","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","Network Architectures and Services","",""
"uuid:c3be7d12-5772-47cc-8257-fae559c99f5f","http://resolver.tudelft.nl/uuid:c3be7d12-5772-47cc-8257-fae559c99f5f","A two-step MCDM methodology for heterogeneous situations","Verloop, B.W.J.","Van Wee, B. (mentor); Rezaei, J. (mentor); Van de Kaa, G. (mentor)","2014","The focus of this research is on Multi Criteria Decision Making (MCDM) in complex multi-actor environments. Within the field of MCDM, the Analytical Hierarchy Process (AHP) is known for its usage in complex situations. Therefore the focus will be on AHP as a well-known and often-used MCDM method. This study can function as a benchmark for other MCDM applications in complex multi-actor environments. A case study, regarding the widespread acceptance of electric cars, is conducted in order to test for differences in results while applying AHP in homogeneous versus heterogeneous environments. When multiple decision makers are interviewed, for the purpose of AHP application, their judgments will differ and they should, as a group, take all criteria into consideration and seek political consensus. This works well in a decision situation where one decision has to be made and the group of decision-makers is homogeneous; in situations where the group is heterogeneous it is difficult to come to a consensus. When consensus cannot be reached, the other option is to calculate the mean. When, for example, comparing the maximum range and the amount of reduced CO2 footprint in the case study, someone with a background in green energy production thinks the CO2 part is much more important than range. On the other hand, someone from the traditional car industry thinks range is way more important than the CO2 footprint. The representing values in the AHP analysis will then be: 1/9 & 9. When applying the geometric mean, a 1 is used in the model, which basically says these two criteria are equally important. This means these criteria will not play a key role in the calculations. 
However, when looking at the individual preferences, this pairwise comparison is a very important one. The assumption is that this misinterpretation of values is due to heterogeneity in groupings, while homogeneity is assumed. Therefore better grouping, which seeks more homogeneity, is needed when the AHP method is applied in complex multi-actor situations. In order to reach this homogeneity, multiple tools have been investigated. The technique found to be most suitable for this research is market segmentation. Usually this tool is applied in marketing, and its usage in combination with the AHP method is applied for the first time in this research. After the tool selection process, the tool needed to be tested. This was done by means of the case study. In this case study a survey was conducted that consisted of two types of questions. Part one consists of market segmentation questions, which enable us to segment the participants based on generic features such as gender and age. Part two consists of AHP-related questions, which enable us to calculate the weights per (sub)criterion. After the survey was conducted, two different analyses were performed. The first one, method A, applied the AHP method as it currently exists. The second, method B, includes the market segmentation part. First the heterogeneous group is reordered into several homogeneous groupings, and for each of these groupings the AHP analysis can be conducted. When homogeneous segments provide the input, the homogeneity axiom is met again. The aim was to find differences in the results of methods A and B. In order to analyze all possible scenarios a special software tool has been developed which is able to calculate 500,000 scenarios within 8 hours. These calculations provided the knowledge that different market segments indicated different criteria as being their most important one. The application of method A indicated that ‘emissions’ is the most important factor regarding the widespread electric car acceptance. 
However, when applying method B, multiple segments are identified in which other factors are more important. Therewith the added value of the newly suggested method has been demonstrated. It is interesting that different segments can be identified when using different factors. Since there is freedom in which factors are used, the analysis ensures a wide range of applicability, from policy makers to executive boards. The findings of this research can introduce a future standard in MCDM, and especially in AHP application, in complex multi-actor situations. Besides this, it can be a basis for further research in this new field of application. In order to provide a robust basis a framework has been developed: “The two-step approach towards group decision making in complex multi actor environments”. Better use of the AHP method in complex multi-actor situations can be accomplished by ensuring that all groups that provide input for the actual AHP analysis are homogeneous. When groups are heterogeneous, as is often the case in complex multi-actor scenarios, the suggested framework should be applied in order to ensure better use.","heterogeneity; AHP; complex; multi-actor; market segmentation; software tool","en","master thesis","","","","","","","Campus only","2015-02-27","Technology, Policy and Management","TLO","","Systems Engineering, Policy Analysis and Management (SEPAM)","",""
"uuid:77a1d7a3-7475-453a-b303-c22e6c779a0b","http://resolver.tudelft.nl/uuid:77a1d7a3-7475-453a-b303-c22e6c779a0b","An Empirical Evaluation of and Toolkit for Asynchronous Programming in C# Windows Phone Apps","Hartveld, D.L.","Van Deursen, A. (mentor); Dig, D. (mentor)","2014","Microsoft has introduced the async/await keywords in C# 5.0 to support developers that need to apply asynchronous programming techniques. However, do developers really use the new keywords, and do they use them correctly? An empirical survey of 1378 open source repositories from GitHub and CodePlex shows that developers often make mistakes. By providing live feedback in the IDE, and by providing a refactoring tool to automatically refactor legacy APM-based code to modern async/await-based code, developers can be supported in using the new language feature correctly. An evaluation of the developed tools shows that they are useful: GitHub pull requests based on patches generated with the developed tools were readily accepted by several open source projects.","software engineering; asynchronous programming; empirical survey; refactoring; async/await","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:bb857573-ca20-44ee-921a-30e66ee3767c","http://resolver.tudelft.nl/uuid:bb857573-ca20-44ee-921a-30e66ee3767c","Effects of Refactoring on Productivity in Relation to Code Understandability","Ammerlaan, E.","Zaidman, A.E. (mentor); Veninga, W. (mentor)","2014","Depending on the context, the benefits of clean code with respect to understandability might be less plain in the short term than is often claimed. This work has studied a system with legacy code in an industrial environment to evaluate whether giving ‘clean code’ to developers would immediately lead to increases in productivity. Developers were given refactored components and were assigned small coding tasks to complete. Contrary to our expectations, we observed increases as well as decreases in understandability, showing that immediate increases in understandability are not always obvious. This study suggests that the negative effects could have been caused by the fact that the test subjects were used to long methods rather than a decomposed design. Another finding is that unit tests accompanying refactorings can lead to more substantial increases in productivity. Furthermore, developers tend to implement higher-quality solutions when working with clean code. A recommendation to improve the net return on refactoring is not to refactor solely to improve understandability, unless one has additional motives, such as easing maintenance or increasing testability.","refactoring; code quality; software re-engineering; unit testing","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:f130c73b-4e40-4e0e-857d-ba66bf0afa06","http://resolver.tudelft.nl/uuid:f130c73b-4e40-4e0e-857d-ba66bf0afa06","Cell Concentration Sensor for Micro-Bioreactors: Software & Data Processing","Keijsers, J.G.M.; Mahabir, A.S.U.","Bossche, A. (mentor); Bastemeijer, J. (mentor)","2014","This thesis was written for the bachelor’s final project of the study Electrical Engineering. It describes the design of a cell concentration sensor. A proof of concept was designed which is able to measure the cell concentration of yeast in a suspension by using sensors which measure the impedance and optical properties of the suspension. The project is divided into three parts, where each duo has their own focus. This thesis focuses on the software and data processing subsystem. The other pairs focus on the optical and impedance sensors. The signals from both sensors are converted to digital values. To process the raw data, filters are designed to condition the signal. Using the filtered data, routines are implemented to calculate the cell concentration. This value is transferred to a computer, where a user interface is implemented to graphically display the progress of the cell concentration. Using only the optical sensor, the measurements were accurate to about 40% in the range 0-10 g/l, and 10% in the range 10-120 g/l, making the total range 0-120 g/l. The impedance sensor extended the range of the measurements to 0-150 g/l with an accuracy of 15%. However, the calculations needed to convert impedance values into cell concentrations were too sensitive for the microcontroller. The routines for the optical sensor were successfully implemented on an Arduino microcontroller, showing promising results. This was achieved with the help of a graphical user interface which was designed for validation when actual yeast measurements were performed. 
Although the routines for the impedance sensor yielded correct results for capacitance, the implementation of the code on the Arduino gave erroneous values when cell concentration was measured.","cell; concentration; sensor; optical; impedance; software; processing; Applikon","en","bachelor thesis","","","","","","","","2019-07-03","Electrical Engineering, Mathematics and Computer Science","Electronic Instrumentation","","Electrical Engineering","",""
"uuid:ceadc7fb-b86f-4755-a465-a30550b15dcb","http://resolver.tudelft.nl/uuid:ceadc7fb-b86f-4755-a465-a30550b15dcb","Development and Reuse of Engineering Automation","Dewitte, P.J.A.R.","Van den Berg, T. (mentor)","2014","Increasingly, engineers in, for example, Aerospace Engineering create software to support their daily engineering activities. This is referred to as Engineering Automation. A prime example of Engineering Automation is Knowledge Based Engineering. It is desirable to reuse and share this software, rather than to discard it soon after its creation. Unfortunately, the overall level of sharing and reuse in daily engineering automation practice is currently low. Producing reusable applications proves to be difficult for engineering automation developers. An initial study comprising a literature review and expert interviews showed that the two main issues are the understandability and validity of the software and documentation. The study also provided insight into the current Engineering Automation culture. The most important aspect identified is the lack of incentives for software activities other than coding itself. Based on the initial study, a software design tool based on incremental code and design documentation generation was selected as the most suitable approach to start tackling the issues identified. To contribute to understandability and validity, and ultimately reuse, the tool aims to encourage the creation of accurate design documentation and to encourage the creation of that documentation before implementing the corresponding code. Creating a design beforehand encourages a well-thought-out and understandable application structure, yet this is rarely done in an Engineering Automation context. The approach was implemented for a specific community of Engineering Automation developers, namely users of the GenDL software framework. 
The resulting tool, GenDL Designer, features a simplified version of the Unified Modeling Language, continuous consistency checking with the code and support for incremental resolution of inconsistencies, e.g. by generating code skeleton fragments or by proposing design diagram modifications. GenDL Designer was developed with Engineering Automation developers in mind and therefore differs significantly from general software engineering tools with similar objectives. To address the potential and feasibility of incremental code and design documentation generation for engineering automation development, a large-scale academic experiment with GenDL Designer is planned in spring 2014. In anticipation of that, trial runs were held, which only allow for preliminary conclusions. GenDL Designer seems to encourage the creation of accurate design documentation and seems to encourage designing before implementing. The principle of incremental code and design documentation generation appears to have the potential to improve the understandability of applications, the validity of their documentation and even the validity of the code itself, due to the improved transparency that uncovers defects. Finally, introducing incremental code and design documentation generation in an engineering automation context appears to be feasible, but some potential users will not be convinced by a short introduction alone. These promising but preliminary findings will hopefully be confirmed with the large-scale academic experiment and later on with experiments in industry.","Engineering Automation; Model-Based System Engineering; Code generation; Professional End-User Development; Software Reuse","en","master thesis","","","","","","","","","Aerospace Engineering","Flight Performance and Propulsion","","System Engineering and Aircraft Design","",""
"uuid:f20738e6-132e-46ed-ac06-7ede1054c6ec","http://resolver.tudelft.nl/uuid:f20738e6-132e-46ed-ac06-7ede1054c6ec","Redesigning social mechanisms in digital calendars to better support a flexible lifestyle","Hartong, J.","Romero Herrera, N.A. (mentor); Van der Helm, A.J.C. (mentor)","2013","Though lifestyles have become increasingly flexible in the past few decades with concepts like last-minute bookings and flex working, we still try to plan our lives with a rigid tool: the digital calendar. Calendar42 is a Dutch start-up that strives to re-invent the digital calendar and with that the way people plan their life. Having grown to a team of eleven in its three years of existence, it now offers a platform that intelligently connects people’s own plans with the plans of other people, organisations and systems surrounding them. By developing a flexible, open platform Calendar42 aims to become an important platform for distribution and consumption of time-related information and put the calendar in the centre of people’s daily life. Purpose Calendar42’s main assets have been around creating intelligent services to support individual users. As many planning challenges are strongly related to social challenges, Calendar42 wants to gain insight into how to apply their underlying system inside the social context. Existing digital calendars already support several (rigid) social mechanics, such as the ability to invite participants to meetings or share availability with others, but do not offer ways to keep up with the continuously changing priorities and newly arising opportunities in this always-connected society. In order to design for these social planning mechanisms it is crucial to understand current flexible social planning mechanisms and explore how these could be translated into the context of a digital calendar. These social planning mechanisms should fit the strengths and values of Calendar42. 
Methodology The study contains a broad exploration of the three main components of the assignment: calendars, flexible lifestyles and social mechanisms. Within this exploration a combination of literature studies, case studies and context research is used. The exploration is used to form design guidelines and directions that feed into an iterative design process. Findings The polychronic time sense has been found to be a good concept to describe the flexible lifestyle. It is characterised by a loss of social and temporal boundaries, and allows people to respond last-minute to continuously changing priorities and opportunities. Fitting planning tools that mix these dynamics with some much-needed structure are still lacking. There is a big opportunity for Calendar42 to support this desired structure by developing a social planning assistant that enables interaction between system and crowd intelligence, offering social planning mechanisms in which users are supported by the system to coherently plan in groups. For this collaboration to be successful, a high level of translucency is key: people should always be accountable for their actions through a high level of visibility and awareness of the social context. Furthermore, in order to serve a wide range of planning scenarios and group structures, the offered mechanisms should be developed to be open-ended and allow for social self-regulation. Conclusions and recommendations The final solution, Gather, enables groups to shape events inside a conversation with a mix of free-form messages and explicit proposals of event details. The system intelligence supports the users in making relevant proposals, leaving the decision making up to the group dynamics. 
Gather and the design guidelines should offer Calendar42 a good base to develop a successful social planning solution for the current dynamic planning needs.","calendar; planning; time sense; polychronicity; software; mobile; interfaces; service; flexible lifestyles","en","master thesis","","","","","","","Campus only","2014-11-29","Industrial Design Engineering","Design Conceptualization and Communication","","Master of Science Design for Interaction","",""
"uuid:076ec1f6-d6c6-4266-afc5-dd71e24d65b2","http://resolver.tudelft.nl/uuid:076ec1f6-d6c6-4266-afc5-dd71e24d65b2","Automated detection of performance regressions in web applications using association rule mining","Zaleznicenka, Z.","Zaidman, A.E. (mentor)","2013","Performance testing is an important stage of developing web applications intended to operate with high availability under severe load. However, this process still remains to a large extent elaborate, expensive and unreliable. Most often, performance testing activities are done manually, and this significantly affects development time and costs. This thesis report describes an approach aimed at automating the analysis of performance tests by maintaining a repository with the results of previously completed test runs and comparing them with new runs to reveal deviations in software performance behaviour. Detection of performance degradations is executed quickly using well-known data mining techniques. The results of the conducted case studies clearly indicate that the suggested approach may successfully assist software engineers in detecting performance regressions in evolving software.","performance testing; software engineering; regression detection; association rule mining","en","master thesis","","","","","","","","2013-11-06","Electrical Engineering, Mathematics and Computer Science","Software and Computer Technology","","Computer Science","",""
"uuid:5ad41677-3bc8-4b6b-afc5-90dc02851fe9","http://resolver.tudelft.nl/uuid:5ad41677-3bc8-4b6b-afc5-90dc02851fe9","How to integrate UX research in an Agile process","Studer, M.","Witteman, A. (mentor); Pasman, G.J. (mentor); Van Kuijk, J.I. (mentor)","2013","Problem Statement The most commonly used process for developing software is currently Agile. Agile splits up the design process into so-called two-week “sprints”, thereby making the process fast and efficient for developers. But current methods of usertesting don’t fit this fast-paced process, as they are time-consuming and too extensive. Therefore, results from the user tests mostly become “outdated” and irrelevant for teams. The same was the case at IceMobile, Holland’s largest mobile application producer, which developed award-winning applications such as the Albert Heijn and the ABN AMRO application. IceMobile shifted from an agency model to developing its own products for the retail sector, making IceMobile responsible for the quality of the user experience (UX) of its products. Having a usertesting process that fitted with their Agile process became important. An empirical study was performed with two product teams, which showed that developers had little knowledge about and empathy for their users. Result My proposal for IceMobile is the Flag sessions: a recurring process in which the teams of IceMobile collaboratively analyse and evaluate user feedback by watching the interviews in the UXlab via a live-stream connection. With an effective way of clustering all insights and a fast interview process, the whole session takes only 1.5 hours over two days for the whole team. By conducting Flag sessions every sprint, the team builds up knowledge and creates more empathy, which makes the results more reliable. The sessions are flexible and take a minimal amount of time, making them fit well with the Agile process of IceMobile. 
The Flag sessions will involve the team in the usertests, thereby reducing the time of the usertesting process while increasing the team’s empathy for the user. As a result, the team comes out of the sessions with more inspiration and motivation to change features or develop new ones for the user.","Agile; UX; software; usertesting; usertest; developers; app; mobile; research; Scrum; user experience; empathy","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Design for Interaction","","Final Master Project","",""
"uuid:91601ffa-d704-45ab-a528-a3b83224fed7","http://resolver.tudelft.nl/uuid:91601ffa-d704-45ab-a528-a3b83224fed7","Exploring Characteristics of Code Churn","Kraaijeveld, J.M.","Zaidman, A.E. (mentor); Bouwers, E. (mentor); Visser, J. (mentor)","2013","Software is a centerpiece in today’s society. Because of that, much effort is spent measuring various aspects of software. This is done using software metrics. Code churn is one of these metrics. Code churn is a metric measuring change volume between two versions of a system, defined as the sum of added, modified and deleted lines. We use code churn to gain more insight into the evolution of software systems. With that in mind, we describe four experiments that we conducted on open source as well as proprietary systems. First, we show how code churn can be calculated on different time intervals and the effect this can have on studies. This can differ by up to 20% between commit-based and week-based intervals. Secondly, we use code churn and related metrics to automatically determine what the primary focus of a development team was during a period of time. We show how we built such a classifier with a precision of 74%. Thirdly, we attempted to find generalizable patterns in the code churn progression of systems. We did not find such patterns, and we think this is inherent to software evolution. Finally, we study the effect of change volume on the surroundings and user base of a system. We show there is a correlation between change volume and the amount of activity on issue trackers and Q&A websites.","code churn; software evolution; software metrics; classification","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Computer Science","",""
"uuid:3758d2cb-d0af-45d1-9f14-93e1c86f60ed","http://resolver.tudelft.nl/uuid:3758d2cb-d0af-45d1-9f14-93e1c86f60ed","Analyzing the Evolution of WSDL Interfaces using Metrics","Kalouda, M.","Zaidman, A. (mentor); Houben, G.J. (mentor)","2013","Recent studies have investigated the use of source code metrics to predict the change- and defect-proneness of source code. While the indicative power of these metrics was validated for several systems, it has not been tested on Service-Oriented Architectures (SOA). In particular, the SOA paradigm prescribes the development of systems through the composition of services, i.e., network-accessible components. In one implementation of SOA which is very popular in industry, services are specified using WSDL interface descriptions. Thus, service consumers are highly affected by the changes performed on an evolving WSDL interface. This fact reveals the importance of assessing the change-proneness of interfaces in SOA. This work aims at investigating the correlation between several cohesion and data type complexity metrics and the change-proneness of a WSDL interface. We empirically investigate the correlation between the number of fine-grained interface changes and complexity and cohesion metrics, including a newly defined data type cohesion (DTC) metric. Furthermore, we perform a manual analysis of the interfaces to gain better insight into our conclusions. We performed these measurements on multiple versions of ten widely used, open-source WSDL interfaces. Our results show that data type complexity, expressed as the number of nodes, appropriately represents data type complexity but is not sufficient to predict the change-proneness of an interface. In addition, we investigate three other cohesion metrics presented in the literature, LCOS, SFCI and SIDC, alongside the newly designed DTC metric. 
Our empirical study shows that among the tested metrics it is the DTC cohesion metric that exhibits the strongest correlation with the number of fine-grained changes performed in subsequent versions of WSDLs. Finally, based on the DTC metric results about the cohesion in data types, we manually analyzed the examined WSDLs and we conclude that highly referenced data types are less change-prone.","SOA; change-proneness; WSDL; service; software evolution; metrics","en","master thesis","","","","","","","","2013-09-18","Electrical Engineering, Mathematics and Computer Science","Software Engineering Research Group","","Computer Science/Information Architecture Track","",""
"uuid:9a70ef3f-aff5-4007-8728-cbda5d82481a","http://resolver.tudelft.nl/uuid:9a70ef3f-aff5-4007-8728-cbda5d82481a","Analysis of fracture network geometries and orientations within a fold-and-thrust structure in the Northern Apennines, Italy","De Vries, H.C.; Benthem, M.","Bertotti, G. (mentor)","2013","This research focuses on fracture networks in sedimentary rocks within the Umbria-Marche fold-and-thrust belt in the Northern Apennines, Italy. The aim of this research is twofold, namely to correlate the geometry of fracture networks with tectonic position and lithology, and to correlate the orientation of fracture networks with the origination of a fold-and-thrust structure. The fold-and-thrust belt within the area strikes about N160° and developed in the Miocene within a compressional regime as the result of the collision between the European Corsica-Sardinia Margin and the Adriatic plate, accompanied by back-arc extension due to rollback. In order to analyze the geometries of fracture networks, software named DigiFract is used to digitize outcrops in the field. Fracture orientation, density, spacing, height and termination are analysed for different lithologies within the Umbria-Marche succession. Orientations of fracture sets are correlated to different structural stages during the development of the fold-and-thrust structure. The first stage in which fractures develop is layer-parallel shortening, during which bedding-normal pressure-solution surfaces develop, striking parallel to the hinge line. Subsequently, longitudinal joints striking parallel to the hinge line develop during fold initiation. This is followed by amplification and tightening of the fold, causing development of transversal joints, striking perpendicular to the hinge line. In theory, fold limbs are preferred sites for deformation within an active-hinge fault-related anticline, rather than the corresponding anticlinal crests (Salvini and Storti, 2001; Salvini and Storti, 2004). 
Our data are not substantial enough to prove this theory, but are in accordance with it. Chert, primarily present in the Maiolica Fm and the Diasprini Fm, and marl, present in the Bisciaro Fm, act as non-fractured or very low-density boundaries between fractured beds. Siliciclastic turbidites are less fractured than carbonates at similar tectonic positions.","fracture network geometry; DigiFract software; fracture orientation; fold-and-thrust structure","en","bachelor thesis","","","","","","","","2013-07-12","Civil Engineering and Geosciences","Geoscience & Engineering","","Applied Geology","",""
"uuid:1927af05-c726-4da4-9c09-b5fca54884bc","http://resolver.tudelft.nl/uuid:1927af05-c726-4da4-9c09-b5fca54884bc","A Technology Roadmap for Software Platform Products","Sugavanam, S.","Van den Berg, J. (mentor); Scholten, V. (mentor); Van Vuren, P. (mentor)","2013","This research project presents the technology roadmap for software product platforms, covering all aspects of software engineering choices, including functional features, technology choices, architecture changes, operational requirements and software process improvements. The developed technology roadmap facilitates decision making on prioritizing the content for the strategic release planning activities. In order to develop the technology roadmap, different research phases have been accomplished, including investigating a wide range of scientific and industry papers, asking platform management & product managers about their needs regarding the roadmap, integrating the received information in the roadmap, and undertaking an evaluation process of the developed roadmap and approach.","Technology Roadmap; Software Platforms; Software Engineering","en","master thesis","","","","","","","","2014-08-31","Technology, Policy and Management","Management of Technology","","","",""
"uuid:90323d56-d208-401e-8d3c-44bec4fca4f4","http://resolver.tudelft.nl/uuid:90323d56-d208-401e-8d3c-44bec4fca4f4","Evaluation of Behavior-Driven Development","Horn Lopes, J.A.","Gross, H.G. (mentor)","2012","Behavior-Driven Development is a recent addition to the family of Agile software engineering methods; the software engineering process of Behavior-Driven Development has not yet been extensively documented. We have therefore created a structured description of this process based on literature, and applied the process in a case study to evaluate if it provides stakeholders with enough information to successfully complete a project. The results of this evaluation show us a number of issues with the existing process. We suggest additions and clarifications to mitigate these issues and evaluate these propositions in the second part of the case study. This shows us that most evaluated changes are an improvement to the process: a more complete software engineering process for Behavior-Driven Development is achieved by incorporating our suggestions.","behavior-driven development; test-driven development; acceptance test-driven development; software development process; agile","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering Research Group","",""
"uuid:b3f1eeba-127c-406b-87ec-2aa4550dadc2","http://resolver.tudelft.nl/uuid:b3f1eeba-127c-406b-87ec-2aa4550dadc2","ArchWiki: Using Web 2.0 for Architecture Knowledge Management","Liauw, D.A.","Pinzger, M. (mentor)","2012","Software architecture plays an important part in program comprehension, which is one of the most time-consuming tasks in software development. If software developers don’t properly share their architectural knowledge with team members, the team will act based on an incomplete or even possibly incorrect view of the code base, and this can lead to architectural degradation. Recently there has been a surge of collaboration, communication and sharing with the advent of Web 2.0 applications. In this thesis we have investigated how Web 2.0 can be used to support software architecture management, in particular in the areas of architecture documentation, architecture retrieval, and collaboration. We created an approach which applies Web 2.0 concepts such as traceability, integration, usability, navigability, and user experience to software architecture management. This approach is supported by a prototype tool called ArchWiki, which has features such as traceability between different artifacts (e.g. source code, architectural diagrams, architectural documentation), context-sensitive views, hyperlinks, notifications, tags, and bookmarks. We performed an initial evaluation study to assess ArchWiki. 
In this study we found that Web 2.0 has the potential to support software architecture knowledge management.","Software Architecture; ArchWiki; Web 2.0; Collaboration; Architecture Knowledge Management; Program comprehension; Traceability; Awareness; Software evolution; Wiki; Notifications; Documentation; Context-sensitive; User Centric; Integration; The Long Tail; Social Experience; Meta Data; Action Tracking; Sharing; Usability","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:4cb45ec4-89f1-4253-8c8a-55e57f35d9ec","http://resolver.tudelft.nl/uuid:4cb45ec4-89f1-4253-8c8a-55e57f35d9ec","Measuring Quality Improvements After Stimulating Software Quality Awareness Among Developers","Alidarso, R.","Zaidman, A. (mentor)","2012","Software systems are getting larger and more complex. It therefore takes more time and money to maintain these systems. The maintenance effort is strongly related to the quality of the implementation during the development phase. Providing qualitative numbers to developers about their previous implementations could help increase the quality of their next implementation. In this thesis an approach is presented both for gathering internal software quality metrics that are related to a system’s maintainability and for extracting information from these metrics. The extracted information is then returned as feedback to the development team to give them the ability to improve their source code. This in turn reinforces the virtuous circle of improved maintainability, which again results in better software quality overall. This is done via a self-made feedback reporting tool which is described in detail. Three projects have been followed both before and after developers got access to our feedback mechanism. Afterwards, we evaluate the situations.","Software Quality; Metrics; Awareness","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software and Computer Technology","","Software Engineering","",""
"uuid:4e407862-1353-4ad5-ae57-6de8e4a853f4","http://resolver.tudelft.nl/uuid:4e407862-1353-4ad5-ae57-6de8e4a853f4","Blending Agile Scrum & Offshore Outsourcing Software Development","Sanabria Laporte, A.","Bouwman, H. (mentor)","2012","In today’s highly competitive environment, it becomes increasingly common for many companies to face internal and external pressure to deliver their products and services at the lowest cost and/or in the fastest time possible. Information Technology Outsourcing (ITO) has helped to release part of such pressure by enabling companies to gain cost advantages and obtain access to qualified labor pools (Manning, Massini, & Lewin, 2008). Not far from this topic are Agile software methodologies, which emerged by promoting continuous releases and close customer participation throughout the software development process. Agile Scrum has been defined as the most mature and widely adopted of all the Agile methodologies (Hossain, Babar, & Paik, 2009). Further investigations suggest that Agile Scrum, in collocated development, can increase productivity to 5-10 times the industry average, and empirical evidence proposes that non-collocated teams can reach the same performance (Sutherland, Schoonheim, & Rijk, 2008). Blending Offshoring Outsourcing projects with Agile Scrum seems to be the perfect combination for many companies. However, companies continue facing difficulties in reaching such promised results, and the mechanisms behind the ability of Agile Offshoring Outsourcing to shorten time-to-market remain hazy. Investigations explaining why certain projects cannot reach such promising results are scarce. This problem context led us to the main research question: What criteria hinder the reduction of time-to-market in software development projects that combine Agile Scrum with Offshoring Outsourcing? To answer it, we first researched what Agile Scrum is and how software development takes place under this methodology. 
Our research revealed that Agile Scrum is formed by three main actors: ScrumMaster, Product Owner, and Development Team. Additionally, literature suggests that Scrum demands high levels of interaction, openness and flexibility among all its participants. The second step in our research was to understand how Offshore Outsourcing projects are executed, through their most common problems. The literature directed us to five main topics: 1) Culture Differences – subdivided into National Culture and Organizational Culture, 2) Coordination, 3) Trust, 4) Time Zone Differences and 5) Effective Communication. Based on these theoretical concepts we built a Conceptual Model containing twelve propositions describing what criteria influence the reduction of time-to-market. Each proposition is associated with Scrum concepts and how they could stimulate or hinder the reduction of time-to-market based on the context where the project takes place. Next, we conducted a multiple case study considering a Belgian telecommunications company (the Client) and an Indian outsourcing service provider (the Vendor). The unit of analysis: projects blending Outsourcing Offshoring and Agile Scrum. The Case Study analyzed two projects: 1) Red Project. The project suffered major delays against the original schedule, so the Vendor decided to introduce Agile Scrum to reduce the project gap. The project delivered part of its functionality and was later halted. It is considered a major failure in the Client-Vendor relationship. 2) Blue Project, the second project in the timeline. Agile Scrum practices were also introduced after the project had already started, through a Vendor initiative. The project experienced neither major delays nor exceptional performance; however, Client-Vendor frictions were found in the meantime. The refined Conceptual Model is described in Figure 6 7, page 78. 
Our findings can be summarized as: 1) The identification of criteria influencing the reduction of time-to-market: literature deductions supported by empirical evidence suggest the existence of at least six criteria influencing the reduction of time-to-market. These criteria are expressed in six propositions of the mentioned Conceptual Model (P1, P3, P5, P7, P9, P11). 2) Implicit evolution of the Agile Scrum methodology: no scientific evidence was found tracking the implicit simplification of Scrum as a methodology, confirming the gap between practitioners and scientists with regard to Agile practices. Additionally, based on the theoretical findings and empirical experience from the Case Study, we derived a set of recommendations for Clients and Vendors aiming to reduce time-to-market while conducting this type of project. For further research, we recommend the following topics: 1) complement the current research with quantitative research techniques and also conduct case studies in industries other than telecommunications; 2) investigate the most suitable contract agreements in Agile Scrum Offshore Outsourcing projects; 3) the execution of Agile Scrum Offshore Outsourcing projects involving multiple organizations; and 4) research on why Agile Scrum certifications are largely concentrated in western countries.","Agile Scrum; Offshoring Outsourcing Software; time-to-market; qualitative research; multiple case-study","en","master thesis","","","","","","","Campus only","","Technology, Policy and Management","Management of Technology","","","",""
"uuid:18e03136-f0e6-4419-bd5b-0de385982e28","http://resolver.tudelft.nl/uuid:18e03136-f0e6-4419-bd5b-0de385982e28","Investigating the usefulness of stack traces in bug triaging","Krikke, M.M.","Pinzger, M. (mentor)","2012","In software engineering, resources such as time, money and developers are limited. Often when bugs are found in the software developed, bug triaging is used to prioritise bug reports and allocate resources to them. When the number of bugs is considerable, this will require a vast amount of time and effort. The goal of this research is to investigate the usefulness of stack traces in bug reports for the assessment of bug report properties, using existing metrics of bug reports and files, namely severity, priority and time-to-fix. In order to investigate the usefulness of stack traces, a research framework and methodology are developed. Overall, we can conclude that stack traces can be used to link software artifacts. Also, stack traces can be a valuable input for prediction models, for example using metrics of related bugs and source files.","software engineering; bug reports; stack traces; bug triaging","en","master thesis","","","","","","","","2012-09-01","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:432189de-d549-43af-88ae-7fc396234aea","http://resolver.tudelft.nl/uuid:432189de-d549-43af-88ae-7fc396234aea","Compile time analysis for hardware transactional memory architectures","Chahar, A.","Van Leuken, T.G.R.M. (mentor)","2012","Transactional Memory is a parallel programming paradigm in which tasks are executed concurrently, in the form of transactions, by different resources in a system, and conflicts between them are resolved at run-time. Conflicts, caused by data dependencies, result in aborts and restarts of transactions, thus degrading the performance of the system. If these data dependencies are known at compile time, the transactions can be scheduled in a way that avoids conflicts, thereby reducing the number of aborts and significantly improving the system’s performance. This thesis presents the Compiler insights to Transactional memory (CiT) tool, an architecture-independent static analyzer for parallel programs, which detects all potential data dependencies between parallel sections of a program. It provides feedback about load-store instructions in a transaction, dependencies inside loops and branches, and several warnings related to system calls which can affect performance. The efficiency of the tool was tested on an application including different types of induced data dependencies, as well as several applications in the STAMP benchmark suite. In the first experiment, a 20% performance improvement was observed when the two versions of the application were executed on the TMFv2 HTM simulator.","GNU; GCC; compiler; plugin; transactional; hardware; memory; software; hardware; dependency; data flow; feedback; intraprocedural analysis","en","master thesis","","","","","","","","2013-02-12","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","Circuits and Systems","",""
"uuid:4b44e2b8-7ab2-4415-8c16-ba774273e4ad","http://resolver.tudelft.nl/uuid:4b44e2b8-7ab2-4415-8c16-ba774273e4ad","Correlation of fracture patterns, lithology and tectonic position in Mesozoic carbonate rocks","Van Oosterhout, D.; Ravestein, T.; Nolte, H.","Bertotti, G. (mentor)","2012","Analyzing core data for fracture analysis is restricted by costs and gives limited information. Outcrop studies with the software DigiFract give a more complete set of information about fractures. The aim of this research is to find correlations between fracture patterns, lithology and tectonic position in the area around Coldigioco, Marche, Northern Apennines, Italy. In this area the current topography consists of two anticlinal structures with thrust origins. These thrusts originated during the Miocene as a result of the Corso-Sardinia-Calabria/Adria collision. The formations in this region consist of limestone and some chert/marl layers. To analyze the fracture patterns there are different ways of statistical processing, including determining fracture density, fracture orientation, fracture height and mechanical unit distributions. By correlating these statistics with lithology and tectonic location, the following can be concluded: the difference in fracture densities between the eastern and western flanks of the eastern anticline is caused by an active axial surface of the fold. The densely fractured western flank of the anticline is part of a deformed panel and the less densely fractured eastern flank is part of an undeformed panel of the fold. Furthermore, one could conclude that the fracture pattern in the undeformed panel of the fold tends to be irregular due to the presence of an axial plane just above this panel. Parts of this panel have already been influenced by this upcoming change of bedding orientation, or some parts have already passed this axial plane. 
The final conclusion related to tectonic position is that on the axial plane of a fold, compressive stresses can cause stylolites perpendicular to the bedding. Two prominent influencing factors in lithology are the hardness of the rock and the presence of marl and chert layers. Formations of intermediate hardness are fractured regularly. The opposite can be seen in the other, harder or softer formations. Finally, one can conclude that marl and chert layers act as non-fractured or less fractured boundaries and cause bed-confined fractures.","fracture pattern analysis; DigiFract software; fracture density; chert/marl layer influence; mechanical unit distribution","en","bachelor thesis","","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","Section for Applied Geology","",""
"uuid:2b238ade-cf76-4220-a407-4f3465d56dbc","http://resolver.tudelft.nl/uuid:2b238ade-cf76-4220-a407-4f3465d56dbc","Fault-Tolerant On-Board Computer Software for the Delfi-n3Xt Nanosatellite","Van den Berg, A.F.C.","Van Genderen, A.J. (mentor); Bouwmeester, J. (mentor)","2012","Fault-tolerant On-Board Computer (OBC) software for the Delfi-n3Xt nanosatellite is needed in order to minimize the risk of failures that may occur while the satellite is operating in space. Failures may be OBC-specific, but failures that affect the state of the entire satellite and influence the health of the data bus may occur as well. Some failures that may occur on the I2C data bus can have a very large impact on the health of the satellite. The failure cases in which the I2C data line or I2C clock line is being pulled low for a longer period of time make communication over the I2C bus impossible. The I2C bus recovery mode that is implemented in the OBC, together with the I2C recovery mechanism that applies to the whole satellite, provides a way to resolve failure cases like these. The failure cases on the I2C bus with less disastrous impacts may result in data inconsistencies and time-outs and are handled by the OBC as well. The I2C data bus performance analysis for Delfi-n3Xt shows a bit error rate of at most 4E-9, which fulfills the requirement that the bit error rate must be 1E-6 or less. Apart from failures on the I2C data bus, failures may occur internally in the OBC hardware or software. Since the OBC controls the whole satellite, a permanent failure in the OBC hardware or software may result in a non-functional satellite. The OBC software is designed and implemented in such a way that it cannot remain in an undefined state for longer than 8 seconds. Besides that, the OBC assures that transfers over the I2C bus never take longer than 30ms. This improves reliability and performance. 
Furthermore, clever routines that save flash memory erase cycles were designed and developed in order to increase the lifetime of the flash memory.","Delfi-n3Xt; On-Board Computer; Fault-Tolerance; Software; I2C; Satellite","en","master thesis","","","","","","","","2012-08-29","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","Computer Engineering","",""
"uuid:63a0a314-08c6-4bfb-b668-d6404bb2ed83","http://resolver.tudelft.nl/uuid:63a0a314-08c6-4bfb-b668-d6404bb2ed83","Getting the right information: Understanding client's needs in the context of software development","Ustohal, J.","Bouwman, W.A.G.A. (mentor); Ortt, J.R. (mentor)","2012","","business model; business model analysis; client's needs; Agile development; software development","en","master thesis","","","","","","","","2012-08-25","Technology, Policy and Management","Infrastructure Systems & Services","","Management of Technology","",""
"uuid:005bcd14-474c-4a70-9691-a63ce1207ac5","http://resolver.tudelft.nl/uuid:005bcd14-474c-4a70-9691-a63ce1207ac5","From Necessity to Fun: Implementing User Centered Design and Brand Driven Innovation at a business to business software provider","Sosinowska, A.","Buijs, J.A. (mentor); Van der Meer, J.D. (mentor)","2012","For almost all companies it is very obvious that in order to survive and grow they need to innovate, whether the innovation concerns their product, market or the way they do things. The paradox of innovation is that on the one hand companies want to innovate (since they know it is necessary), while on the other they want to do things the way they are used to and naturally prefer to rely on the safe recipes that worked in the past. Since the innovation process significantly differs from regular business, it requires a different mindset of the whole organization. Regular business is all about developing and selling one (or more) unique idea(s) and avoiding all kinds of risks, while innovation is all about out-of-the-box thinking, breaking the rules, risk taking and above all producing ideas and throwing most of them away. The latter generally contradicts operational effectiveness; it seems like simply throwing money and effort away. Innovation also means stepping out of the comfort zone, which is usually associated with a painful and stressful situation, since unknown territory has to be entered and it is difficult to predict what to expect. Organizations that are to innovate are also stepping out of their comfort zone of known routines and working recipes; an often observed reaction is resistance and falling back on known routines. The innovation process should, however, be a fun and rewarding process in order for it to be successful. The reward should lie not only in the profit the company makes or the prospect of market domination, but also in the learning process that the organization and its employees go through. 
For my graduation project I have chosen to introduce the approach that changed the innovation from a painful and necessary business to a fun and rewarding process at software provider for the business-to-business market. In this report of my graduation project the process of introducing different techniques of user research and creative problem solving to the company can be found. Further a proposal of a User Centered Design Toolkit meant to help the company to be able to do user research that results in gaining of a deep knowledge of the use needs as well as help the company employees to work together is shown.","Design; User Centered Design; Innovation; Brand Driven Innovation; Business to Business; Software","en","master thesis","","","","","","","Campus only","2013-08-16","Industrial Design Engineering","Industrial Design","","Master of Science Strategic Product Design","",""
"uuid:610a8924-46b3-4c53-9fec-53463b28a9ff","http://resolver.tudelft.nl/uuid:610a8924-46b3-4c53-9fec-53463b28a9ff","Evaluating the Quality of Opponent Models in Automated Bilateral Negotiations","Hendrikx, M.J.C.","Baarslag, T. (mentor); Hindriks, K.V. (mentor); Jonker, C.M. (mentor)","2012","Automated negotiation agents are agents that interact in an environment for the settlement of a mutual concern. An important factor influencing the performance of a negotiation agent is how it takes the opponent into account. The main challenge in this aspect is that opponents typically hide private information to avoid exploitation. In such a setting, an opponent model can help by estimating the opponent's strategy or preference profile. This work contains the first recent survey of opponent models in automated negotiation. One of the main conclusions of this survey is that there is currently no fair method to evaluate and compare the quality of a set of opponent models. Insight into the quality of an opponent model could lead to the development of a better model. In this work we focus on a specific type of opponent model, which models the opponent's preferences. Based on a detailed analysis of the factors influencing the quality of this type of opponent model, we introduce and apply two fair measurement methods to quantify the performance gain relative to not using an opponent model and the accuracy of the model. Our contribution to the field of automated negotiation is threefold: first, we provide a comprehensive survey of opponent models; second, we introduce a method to isolate the components of a negotiation strategy; finally, we construct and apply two fair evaluation methods to quantify the quality of a set of opponent models which model the opponent's preferences.
Taken together, this work structures the field of opponent models and provides insight into how to improve existing models.","negotiation; software agents; opponent models; opponent modeling; machine learning","en","master thesis","","","","","","","","2012-08-17","Electrical Engineering, Mathematics and Computer Science","Computer science","","Media knowledge engineering","",""
"uuid:cf8376a2-4efb-41e7-9af9-ef20bc5cb29d","http://resolver.tudelft.nl/uuid:cf8376a2-4efb-41e7-9af9-ef20bc5cb29d","Studying the Effects of Code Clone Size on Clone Evolution","Bouma, G.","Zaidman, A. (mentor)","2012","The practice of code cloning is something every software developer has to deal with at some point. The evolution of code clones is of particular interest, because the effects of cloning code show up later in the lifetime of a project. We research the effects a clone's properties have on its evolutionary behavior. For this purpose an approach to extract the clone size information from mined software repositories is shown. Using this approach an insight can be gained into how clone sizes evolve over time, as well as whether the size has an influence on other evolutionary patterns of a clone. We present our findings and conclude that clone size influences a clone's evolution in several ways.","software evolution; code clones; code smells; software development","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Evolution Research Lab","",""
"uuid:b467aa94-76d9-4425-8ed2-4f9a0121d04a","http://resolver.tudelft.nl/uuid:b467aa94-76d9-4425-8ed2-4f9a0121d04a","System-level Fault-Tolerance Analysis of Small Satellite On-Board Computers","Burlyaev, D.","Van Leuken, R. (mentor)","2012","Commercial Off-The-Shelf (COTS) electronic components offer cost-effective solutions for the development of On-Board Computers (OBCs) in the small satellite industry. However, COTS parts are not originally designed to withstand the space radiation environment. Traditional fault-tolerance practices rely on expensive radiation tests or are based on circuit-level knowledge which is not easily available. This work proposes a novel simulation-based statistical approach to assist satellite designers in performing OBC fault-tolerance analysis. The presented approach is based on high-level system modeling and an object-oriented fault injection mechanism. Such a technique allows comparison between fault-tolerance techniques and reveals the consequences of radiation effects in the COTS parts at early development stages. The work covers the implementation of the proposed simulation framework, which includes the OBC and fault modeling. The fault models are based on the conducted radiation environment analysis. A range of software and hardware fault detection and mitigation techniques is investigated as case studies. They include time and hardware Triple-Modular Redundancy, FPGA-based memory scrubbing with Hamming encoding, and watchdog/co-processor monitoring. The case studies reveal that the proposed approach can be used to choose suitable fault-tolerance techniques, increase their efficiency, and reduce the required hardware resources.
Three papers are included: - SystemC-based On-board Computer Modeling for Design Fault-Tolerance Assessment - A Simulator of On-Board Computers for Evaluating Fault-Mitigation Techniques - System Fault-tolerance Analysis of Small Satellite On-board Computers","fault-tolerance analysis; satellite On-Board Computer; Single-Event Effects; space radiation environment; system-level modeling; software-hardware co-design","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","Embedded Systems","",""
"uuid:ba6c8459-2fdb-4ff0-b89b-0100fe96ed08","http://resolver.tudelft.nl/uuid:ba6c8459-2fdb-4ff0-b89b-0100fe96ed08","Managing Software Design Erosion with Design Conformance Checking","Karsidi, N.J.","Pinzger, M. (mentor)","2012","Software design erosion is a well-known process; however, once it becomes noticeable it may already have progressed so far that repairing it is difficult and costly. Design conformance assessment techniques can help developers to detect – and mitigate – the effects of design erosion before they cause problems for the long-term maintainability of software systems. Existing techniques have already proven successful in controlled cases, but are not yet ready for widespread adoption in production environments. This thesis studies the requirements and effects in the context of a real-world production environment and serves as a step towards making design conformance assessment techniques an economically viable investment for businesses. The contributions of this thesis are: an evaluation of the maturity of existing techniques, an inventory of requirements that arise from business environments with respect to design conformance assessment, and the implementation of the SharpDCA prototype tool, which was evaluated in an ongoing development project.","software; design; erosion; conformance; checking; assessment; static; analysis","en","master thesis","","","","","","","","2012-05-11","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:2bee12f5-22da-45fc-9576-c61808562a78","http://resolver.tudelft.nl/uuid:2bee12f5-22da-45fc-9576-c61808562a78","Dead code elimination for web applications written in dynamic languages","Boomsma, H.B.","Gross, H.G. (mentor)","2012","Dead code is source code that is not necessary for the correct execution of an application. Dead code is a result of software ageing. It is a threat to maintainability and should therefore be removed. Many organizations in the web domain have the problem that their software grows and demands increasingly more effort to maintain, test, check out and deploy. Old features often remain in the software, because their dependencies are not obvious from the software documentation. Dead code can be found by collecting the set of code that is used and subtracting this set from the set of all code. Collecting the set can be done statically or dynamically. Web applications are often written in dynamic languages, for which dynamic analysis is best suited. From the maintainability perspective a dynamic analysis is preferred over static analysis because it is able to detect reachable but unused code. An Eclipse plug-in is also developed to reduce the impact of existing dead code on the development process before removing it. In this thesis, we develop and evaluate techniques and tools to support software engineering with dead code identification and elimination for dynamic languages. The language used for evaluation is PHP, one of the most commonly used languages in web development.
We demonstrate how, and to what extent, the proposed techniques and tools can support software developers at Hostnet, a Dutch web hosting organization.","dead code elimination; software maintenance; web applications; PHP","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Computer Engineering","",""
"uuid:ed394af6-b781-4637-ada6-0dfd8b997841","http://resolver.tudelft.nl/uuid:ed394af6-b781-4637-ada6-0dfd8b997841","Investigation of the impact of cohesion on the change-proneness of Java interfaces","Pingen, R.A.","Pinzger, M. (mentor)","2012","A lack of cohesion is often associated with bad software quality, and could lead to more changes and bugs in software. In this thesis the impact of cohesion on the change-proneness of Java interfaces is investigated. Showing the existence of a relation between these concepts can lead to better change prediction models that can support software developers in defect prediction and prevention tasks. An empirical study is performed on several open source projects to test three hypotheses. The first hypothesis investigates whether cohesion metrics correlate with the number of fine-grained source code changes. The results of the correlation analysis show a correlation between two cohesion metrics and the number of changes in Java interfaces. The confounding effect of class size is a possible explanation for the correlation between the cohesion metrics and the number of fine-grained changes. This idea is investigated through the second hypothesis, which studies the correlation between the cohesion metrics and interface size metrics. The hypothesis is accepted for the same two metrics. The third hypothesis of this thesis tries to answer whether cohesion metrics can improve change prediction models based on size. By performing three different experiments with multiple classification algorithms, we have found no evidence that supports the final hypothesis. In conclusion, cohesion metrics can be used to predict changes in source code.
However, they are not better predictors than size metrics, and we have found no evidence to support the idea that they can improve change prediction models based on size.","cohesion; interfaces; change prediction; change-proneness; source code metrics; software quality; Java","en","master thesis","","","","","","","","2012-04-03","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:dbb29450-a3e2-46fa-b631-814b6e1d37f0","http://resolver.tudelft.nl/uuid:dbb29450-a3e2-46fa-b631-814b6e1d37f0","A Methodology for Assessing the Benefits of Software as a Service: Perspectives and benefits when delivering Enterprise Resource Planning as service within Small and Medium Sized Enterprises","Prabowo, A.H.","Hua Tan, Y. (mentor); Janssen, M. (mentor); Barjis, J. (mentor)","2012","Software as a Service (SaaS) has been utilized as a means to deliver Enterprise Resource Planning (ERP) systems for the last decade. This software delivery model enables Small and Medium-Sized Enterprises (SMEs) to outsource the system from vendors on a pay-per-use or pay-per-period basis without having to make prior investments. Yet the SaaS model is still conceptually immature, and the unforeseeable uncertainty is relatively high because of differences in application specificity and behavioral acceptance when adopting a SaaS-based application. In the field of ERP and SaaS, there is no literature that discusses what benefits can be achieved from adopting a SaaS ERP system. Moreover, differences in opinion about SaaS benefits have emerged from the perspectives of critical actors, e.g. regarding agreement on cost savings and time savings among vendors and adopters. This research explores the benefits of delivering a SaaS ERP system within SMEs by considering three perspectives: the perspective of vendors as SaaS providers, the perspective of SMEs as SaaS adopters, and the perspective of the lifecycle process of SaaS. In order to accommodate differences which might arise when conceptualizing these perspectives, each perspective is directed to a set of common goals and benefits, namely the benefits of Service Oriented Computing (SOC). For this reason, we developed a methodology for assessing the benefits of SaaS that consists of three main steps: understanding, conceptualizing, and estimating the benefits of SaaS.
Research data was collected from interviews and questionnaires to appreciate the benefits from the three perspectives. Following this, the structures of SaaS benefits were conceptualized for each perspective. Then, we estimated the benefits of SaaS qualitatively, by indicating such benefits within a case study, and quantitatively, by applying the Partial Least Squares (PLS) path modeling method, within which the structures of SaaS benefits were developed using the SmartPLS software. Our findings show that the proposed methodology can be used to realize the benefits of SaaS in a more structured way through the models of SaaS benefits that accommodate the three perspectives. Although the models are weakly validated due to limited research data, rich descriptions of SaaS benefits pertaining to the delivery of a SaaS ERP system within SMEs can be gained from the structures of SaaS benefits. Furthermore, a sound understanding to overcome differences in opinion about SaaS benefits can be expected from this comprehensive methodology, since each perspective in realizing the benefits is directed to the achievement of the benefits of SOC. Still, we suggest further research for the task of validating the models with more reliable research data. The comprehensive methodology can be improved by enhancing the variables and factors that define the benefits, and can be used as a best practice to improve readiness for adopting a SaaS-based application in general, and a SaaS ERP system in particular.","Enterprise Resource Planning (ERP); Software as a Service (SaaS); Service Oriented Architecture (SOA); Benefits of Service Oriented Computing (SOC); Small Medium Sized Enterprises (SMEs)","en","master thesis","","","","","","","","","Technology, Policy and Management","Engineering and Reflection","","ICT","",""
"uuid:7ab39a28-2902-4811-8efc-c4645d656e74","http://resolver.tudelft.nl/uuid:7ab39a28-2902-4811-8efc-c4645d656e74","Identification and analysis of point scatterers in time series InSAR","Van der Torren, P.T.","Hanssen, R.F. (mentor); Esfahany, S.S. (mentor)","2011","In this study methods are developed for improved analysis and processing of PSI data. PSI, or radar interferometry, makes it possible to use satellite radar images to measure deformation of the Earth’s surface and objects on it with millimetre accuracy. However, interpreting the measurements and identifying the actually measured objects is still a common problem. There are no dedicated tools available for validation, for finding both falsely detected and falsely rejected points, or for deeper analysis of PSI results. Existing algorithms for automatic coherent scatterer selection need many acquisitions to obtain reliable results, which makes it necessary to collect data for many months or years before processing can be done. In this study a suite of tools is developed that facilitates detailed analysis of results and versatile processing of radar data. This suite consists of a visual inspection tool and a toolbox that handles metadata and can do versatile processing of radar data. Furthermore, a method is developed for reliable point scatterer selection that works for a small number of acquisitions, among other improvements.","satellite; remote sensing; subsidence; deformation; InSAR; radar interferometry; software; scatterer; signal processing; time series; Delft train tunnel","en","master thesis","","","","","","","","","Aerospace Engineering","Mathematical Geodesy and Positioning","","Geomatics","",""
"uuid:e70c4d92-43aa-41c3-8829-22270162da67","http://resolver.tudelft.nl/uuid:e70c4d92-43aa-41c3-8829-22270162da67","Evolution of the Software-as-a-Service model: The analysis from a business model perspective","Azimbayev, Z.","Ortt, R. (mentor)","2011","The Software as a Service (SaaS) model allows subscription to a wide variety of application services that are developed specifically for, and delivered over, the Internet on an as-needed basis, without the need to install and manage third-party software in-house. According to Salesforce.com Inc, the wide adoption of the SaaS model will eventually lead to the end of the on-premise software era. Currently the success of the SaaS model goes hand-in-hand with the popularity of cloud computing. For instance, Google, in collaboration with Samsung, recently introduced to the mass market their Chromebook with the cloud-based operating system Chrome OS on board, which is also delivered as a service. Nevertheless, the idea of outsourcing software or hardware is not new. Before SaaS there was the Application Service Provider (ASP) model, which in the past was also considered very promising, but failed to meet the requirements of the wide market and serves a niche market today. The interesting fact is that the ASP and SaaS models are very similar, and some authors do not even make a distinction between them. However, we believe that there are differences between them that affected the adoption of the models. Thus, we have set two objectives for this study. The first is to conduct a comparative analysis of the two software delivery models from a business model perspective and to study factors that possibly affected their adoption. The second is to identify the components of the business model that require changes in order to make a shift from one software delivery model to another, and the barriers that hamper these alterations.
Therefore this research raised the following main research question: “Why is Software-as-a-Service model more successful today than the model applied by Application Service Providers?” In order to answer this main research question, we formed four sub-questions, the answers to which were used as a foundation for the final analysis. We adopted two theoretical frameworks to answer the research questions. Firstly, in order to get an overview of the past and current state of the cloud computing industry, the theory of diffusion and development patterns was used. Secondly, for the purpose of comparing the software delivery models, we took the STOF business model framework, through which it was possible to highlight the differing components of SaaS and ASP. Subsequently, we formed and validated six propositions through two completely independent methods – case studies and expert interviews. The results showed that the business models of SaaS and ASP differ in four components: Technical Architecture (particularly Software Architecture), Pricing, Cost and Market Segment. As predicted, the most important component turned out to be the Technical Architecture, which practically co-determined the differences in the other aspects. We confirmed that the Technical Architecture of the SaaS model is better at reaching ‘economies of scale’ compared to the ASP model; therefore the SaaS model was able to cut costs, drop service prices and serve wider markets, which positively contributed to the large-scale diffusion of the model. Furthermore, a set of recommendations for managers on ways of switching from one software delivery model to another was formed. We also conclude that although SaaS has a more advanced software architecture that makes it more successful on the market, it is still not a perfect solution for all types of companies.
Mainly because of the security issues that multi-tenant architecture entails, enterprise applications delivered over the ASP model could be a solution for certain market niches dealing with sensitive data, lacking IT expertise and willing to pay extra for the service. Therefore, before committing to a certain model, providers have to carefully consider which types of companies they are able and willing to serve. This is also very important for companies that already have a large installed base of customers and legacy software with a single-tenant architecture. We found that virtualization technologies are rapidly developing and practically enable single-tenant applications to “fake” multi-tenancy and run at comparatively high levels of resource utilization, as in the SaaS model. Therefore, the first recommendation for traditional enterprise software providers would be to decide whether they are willing to maintain their current market segment or to capture the wider market of SMEs and even individual end-consumers. Based on that decision, the most suitable architecture can be chosen. However, this study has some limitations, among which the most serious is the lack of attention to external macro-economic factors that could play an important role in the diffusion and development of the innovation. Although an attempt was made to partially cover that topic, it still requires more careful and extensive research. We therefore suggest considering it as an opportunity for further research in this field.","Software as a Service (SaaS); Application Service provider (ASP); Cloud computing; Diffusion and development pattern of high-tech innovation; Business model","en","master thesis","","","","","","","","2011-12-01","Technology, Policy and Management","Technology, Strategy and Entrepreneurship","","Management of Technology","",""
"uuid:67512794-9b10-4ef7-997f-5667bd83fe2d","http://resolver.tudelft.nl/uuid:67512794-9b10-4ef7-997f-5667bd83fe2d","Software Archivering met Emulatie","Van Dam, M.C.; Van Egmond, J.A.","Van Nieuwenhuizen, P.R. (mentor)","2011","Veel culturele en onderzoeksdata wordt tegenwoordig gearchiveerd. Bij zo'n archief is het van belang dat het op lange termijn nog meegaat, dus dat technologische veranderingen het archief niet ontoegankelijk maken. Voor software en andere dynamische data levert dit speciale problemen op, aangezien het meer afhankelijkheden heeft dan normale data. In dit onderzoeks- en implementatierapport is Emulatie onderzocht als archiveringsstrategie binnen DANS (Data Archiving and Networked Services). Hiernaast is een prototype-beheerapplicatie ontwikkeld om het mogelijke beheerproces inzichtelijk te maken binnen DANS.","DANS; Data-archivering; Emulatie; Software-archivering","nl","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:cf41d6dd-63ae-42c7-8fab-a666f7b229a7","http://resolver.tudelft.nl/uuid:cf41d6dd-63ae-42c7-8fab-a666f7b229a7","Een verkennend onderzoek naar de implementatie van een radarsysteem op het USRP1 met GNU Radio","Peters, R.Y.; Liu, X.","Van der Veen, A.J. (mentor)","2011","In tegenstelling tot conventionele radar- en radiosystemen kan de toepassing van een Software Defined Radio (SDR) grotendeels bepaald worden in software. Een SDR bestaat uit een RF-frontend, een paar ADC/DAC’s, een FPGA en een hostcomputer. In de hostcomputer en de FPGA kan de functie van een SDR volledig worden geherdefinieerd, zonder dat daar een aanpassing in de hardware voor nodig is. Het USRP1 is een specifieke implementatie van het hardwaregedeelte van een SDR. Het USRP wordt aangesloten op een dochterbord dat bepaalt op welke frequenties gewerkt kan worden. In combinatie met GNU Radio voor het softwaredeel van de SDR vormt dit een aantrekkelijk onderzoeksplatform om de mogelijkheden van SDR voor radar- en radiosystemen mee te verkennen. GNU Radio is een uitgebreid softwarepakket waarmee alle opties van het USRP benut kunnen worden. Met behulp van relatief makkelijk te doorgronden Python-scripts is het mogelijk om SDR’s te bouwen. Met GNU Radio Companion kan bovendien zelfs zonder enige code te schrijven gebruik worden gemaakt van de meeste mogelijkheden van GNU Radio. In dit onderzoek werd een enkele USRP1 in combinatie met een RFX2400-dochterbord gebruikt om de basis te leggen voor een eenvoudig radarsysteem. Het hardwareontwerp van het USRP1 legt hierbij een aantal beperkingen op aan de implementatiemogelijkheden. Het voornaamste probleem is de gebrekkige isolatie tussen de zender en de ontvanger. Daarnaast schiet de bandbreedte van de USB-interface tekort. Uiteindelijk is wel de basis gelegd voor een Frequency Modulated Continuous Wave (FMCW) radar. Op een enkele USRP werkt dit systeem echter niet goed, vanwege de eerdergenoemde isolatieproblemen.
Het gebruik van meerdere USRP’s kan de isolatieproblemen oplossen. Het probleem van de beperkte interface-bandbreedte kan worden verholpen door het gebruik van het USRP2, een verbeterde versie van het USRP1.","USRP; Software Defined Radio; Radar; GNU Radio","nl","bachelor thesis","","","","","","","","2011-08-01","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","Circuits and Systems","",""
"uuid:fe76399c-0370-4293-af86-1e6c44dedbd1","http://resolver.tudelft.nl/uuid:fe76399c-0370-4293-af86-1e6c44dedbd1","Strategic Design Framework towards sustained reduction of residential electricity consumption in Emerging Market Economies: The case of Brazil and China","Modesto, R.B.","Diehl, J.C. (mentor); Christiaans, H.H.C.M. (mentor)","2011","Energy is considered by the United Nations as central to sustainable development and poverty reduction, affecting all its aspects - social, economic, and environmental - including livelihoods, access to water, agricultural productivity, health, population levels, education, and gender-related issues. As such, energy can be seen as a positive stimulus for sustainable development. However, it is expected that the world population will increase from 6 billion to 9 billion people in 2025. In parallel, a large part of the world population is moving from poverty to the middle class. This combination of an increase in population and income may lead to a remarkable rise in purchases of consumer electronic products and consequently in energy consumption. If no efforts to change are made, the increase in residential energy consumption in emerging economies will bring the world severe environmental and social problems such as the greenhouse effect, lack of access to affordable energy resources and air pollution. If the usual track for growth is kept, an environmental or economic crisis is still to come. As a numerical reference, Lomborg (2007) states that the developing world, which now accounts annually for about 40% of global carbon emissions, is likely to produce 75% by the end of the century. The main question addressed by this research project is “How to reduce the energy consumption of consumer products by residential consumers in emerging economies by means of product alterations (hardware) and change in consumer behaviour (software)”.
From this perspective, in the first instance for the BRICs, exemplified by Brazil and China (large emerging market economies), a literature study was carried out into the current and expected increase of residential energy consumption. In parallel, present actions to reduce energy consumption were mapped and critically analysed. The final outcome is a strategic conceptual framework that integrates the actors and drivers of consumption to design energy efficient products, promoting energy efficiency (hardware) and contextualized behaviour (software) in order to reduce residential electricity consumption in emerging economies.","Design for Sustainability; Emerging Market Economies; Energy Consumption; Electricity Consumption; Strategic Design Framework; Brazil; China; Hardware; Software; Behavior; Innovation","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Design Engineering","","Master of Science Strategic Product Design","",""
"uuid:cff6cd3b-a587-42f2-a3ce-e735aebf87ce","http://resolver.tudelft.nl/uuid:cff6cd3b-a587-42f2-a3ce-e735aebf87ce","Constructing a Test Code Quality Model and Empirically Assessing its Relation to Issue Handling Performance","Athanasiou, D.","Zaidman, A.E. (mentor); Visser, J. (mentor); Nugroho, A. (mentor)","2011","Automated testing is a basic principle of agile development. Its benefits include early defect detection, defect cause localization and removing the fear of applying changes to the code. Therefore, maintaining high-quality test code is essential. This study introduces a model that assesses test code quality by combining source code metrics that reflect three main aspects of test code quality: completeness, effectiveness and maintainability. The model is inspired by the SIG Software Quality model, which aggregates source code metrics into quality ratings based on benchmarking. To validate the model we assess the relation between test code quality, as measured by the model, and issue handling performance. An experiment is conducted in which the test code quality model is applied to 18 open source systems. The correlation is tested between the ratings of test code quality and issue handling indicators, which are obtained by mining issue repositories. The results indicate a significant positive correlation between test code quality and issue handling performance. Furthermore, three case studies are performed on commercial systems and the model's outcome is compared to experts' evaluations.","Test code quality; Issue handling; Software engineering; Software testing; Metrics","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Computer Science","",""
"uuid:45eb9ddb-67a8-4e9b-ad87-5fff58989d45","http://resolver.tudelft.nl/uuid:45eb9ddb-67a8-4e9b-ad87-5fff58989d45","Capturing Value from Platform-as-a-Service Technology: Platform-as-a-Service Adoption Model for Large Enterprises","Primagati, A.","Tan, Y. (mentor); De Reuver, M. (mentor); Warnier, M. (mentor); Nikayin, F. (mentor); Wegbrands, P. (mentor)","2011","The cloud computing market is expected to grow rapidly in the next five years. Even though Software-as-a-Service and Infrastructure-as-a-Service solutions will dominate most of the market, the Platform-as-a-Service solution is forecast to be the fastest-growing segment, especially in Western Europe. On the other hand, cloud computing is also argued to be the new playing ground for the telecommunications industry. As the industry faces the threat of becoming just a network “bit pipe”, telecom operators might find a new revenue source within the cloud computing domain. The issues explained above have led us to conduct research aimed at two objectives. The first is to give guidance to large enterprise companies in the Netherlands on how to capture value from PaaS offerings. The second is to identify whether telecom companies could create particular new business value and opportunities from PaaS service provision. Therefore this study raised a challenging main question: “What factors are important for large enterprise clients to adopt Platform-as-a-Service in order to capture the business value that is offered from Platform-as-a-Service offering, and what are the strategic implications of the Platform-as-a-Service adoption for Telecom industry?” To answer the main question, five research sub-questions were formulated. A theoretical approach from the business model literature was used to guide answering the main research question. A conceptual model of PaaS adoption factors was developed, based on the STOF business model framework.
An analysis of the PaaS market in the Netherlands was conducted among 9 client companies and 3 PaaS provider companies, in order to validate the conceptual model. The results showed that the PaaS market in the Netherlands is still immature, making it hard to generalize the validation of the conceptual model for PaaS adoption. The analysis approach was therefore modified, enabling us to analyze the conceptual model for SaaS adoption as well. Our study found that several factors within the Service domain are important for clients when they consider which applications to move to a PaaS or SaaS environment. The most important factor is the criticality of the application to be moved to the cloud. These factors from the Service domain influence clients’ technical requirements on Security, Quality of Service, and System Integration issues. Furthermore, our research found several factors that clients take into account when choosing PaaS and SaaS providers. These factors serve as an assessment framework for whether a provider’s profile can meet clients’ requirements. Our findings show that Branding is the most important factor for PaaS and SaaS providers. Branding is important because it is closely associated with trustworthiness. Moreover, trust itself becomes very important, because in PaaS and SaaS implementations clients lose control of their application data. In addition, our findings show that the factor of data location is very important in PaaS and SaaS adoption. A key stakeholder, namely the regulators, drives this data location factor. We also found that clients perceive system integrators and network providers as the other important key stakeholders in the provision of PaaS and SaaS services. Furthermore, our study found that telecom companies could play two roles in the cloud computing domain in order to create new business value, and hence a new revenue source. 
The first role is to be a cloud service provider. In this role, telecom companies could exploit their main resource, the Internet network infrastructure, to deliver guaranteed, high-level end-to-end cloud services (both PaaS and SaaS). The second role is to be a cloud service broker. This role exploits telecom companies’ broad customer base and customer relationships in order to mediate between cloud providers and clients. Being a cloud service broker enables telecom companies to gain monetary incentives by means of revenue sharing. Our study concludes that clients need to take into account factors in the four domains of Service, Technology, Organization, and Finance in order to capture the value of PaaS and SaaS offerings. On top of that, clients need to take into consideration the applicable regulations in order to legally implement cloud computing solutions. Clients also need to acknowledge the importance of system integrators and network providers in utilizing cloud computing services. This study adds several main contributions to the literature. Firstly, this study makes the first attempt to construct an adoption model specifically for PaaS and SaaS. Secondly, it contributes to business model theory, particularly by extending the applicability of the STOF model. Thirdly, it confirms the existing literature that describes key stakeholders in cloud computing; on top of that, we contribute by adding the network provider as another key stakeholder. Nevertheless, this study bears some limitations. Due to the small sample size, this research serves as a first validation of the constructed model. We are aware that further research can conduct more extensive validation by means of survey research. However, due to the immaturity of the cloud computing market, especially for PaaS, we suggest that further research on PaaS be conducted later, once the market has developed. 
Besides the sampling issue, our research was not able to validate our findings on the telecom companies’ role in cloud computing. Further research could focus on validating our results empirically through the involvement of industry experts.","cloud computing; Platform-as-a-Service; Software-as-a-Service; business model; market adoption; telecommunication industry; adoption factors","en","master thesis","","","","","","","","","Technology, Policy and Management","Information and Communication Technology","","Management of Technology","",""
"uuid:9af1350d-f3c4-4311-a98a-40fac5c343f1","http://resolver.tudelft.nl/uuid:9af1350d-f3c4-4311-a98a-40fac5c343f1","Aiding Software Developers to Test with TestNForce","Hurdugaci, V.","Zaidman, A.E. (mentor)","2011","Regression testing is an expensive process because, most of the time, all available test cases are executed. Many test selection/filtering techniques have been researched and implemented, each with its own strong and weak points. This paper introduces a tool that helps developers and testers identify the tests that need to be executed after a code change, in order to check for regressions. The implementation is based on dynamic code analysis and the purpose of the tool is to eliminate the time spent on testing with inappropriate test cases (tests that bring no value in checking for regressions). The adequacy, usability and completeness of this tool have been evaluated by means of a user study. During the study, a number of developers used the tool and expressed their opinion about it through questionnaires.","software evolution; software engineering; software testing; software maintenance","en","master thesis","","","","","","","","2011-07-19","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:6e02be89-3d5a-4207-a449-ca14eff30231","http://resolver.tudelft.nl/uuid:6e02be89-3d5a-4207-a449-ca14eff30231","Evaluating the Lifespan of Code Smells in a Software System using Software Repository Mining","Peters, R.R.","Zaidman, A.E. (mentor)","2011","An anti-pattern is a commonly occurring solution to a recurring problem that always has negative consequences. Code smells are considered to be symptoms of anti-patterns and occur at the source code level. The lifespan of code smells in a software system can be determined by mining the software repository on which the system is stored. This provides insight into the behaviour of software developers with regard to resolving code smells and anti-patterns. This thesis presents a custom-built application that computes the lifespans of certain types of code smells in a software repository. As a case study, this tool is applied to seven open source systems in order to answer research questions concerning the lifespan of code smells and the refactoring behaviour of developers. The results of this study reveal that engineers are aware of code smells, but not very concerned with their impact, given the low refactoring activity. Finally, several suggestions are given to further develop the application and to extend the work done in this thesis.","Software evolution; Code smells; Software repository mining","en","master thesis","","","","","","","","2011-07-13","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:433f8d51-0052-47f5-aca5-f0de56f642fa","http://resolver.tudelft.nl/uuid:433f8d51-0052-47f5-aca5-f0de56f642fa","Bing Forecast","De Bokx, R.; Gravendeel, L.; Krause, M.","Sodoyer, B.R. (mentor); Guy, D. (mentor)","2011","Bing Technology, a Philadelphia, USA based software company, seeks to develop a software framework that can be used to create forecasts for a wide range of predictive domains. In particular, they would like to create an application of this framework that is able to perform stock market forecasting. The goal of our project is to develop an innovative algorithm to perform data prediction, and to apply this algorithm in a stock market forecasting application. The application should generate investment strategies which can be evaluated to output an optimal investment portfolio. We have achieved this goal by making use of a variant of Genetic Programming to create strategies which are represented internally by a tree containing ""modules"". Each module performs a specific function; examples include simple numerical parameters, statistical functions and technical indicators. The backbone of the application is written in Java, communicating with a web-based PHP front-end through a MySQL database. The front-end allows the user to create jobs for the back-end to process, as well as view the resulting strategies and related statistics. Test runs have shown a correlation between in-sample and out-of-sample performance. Further, we have determined with high certainty that there is a predictable relation between the complexity of a test run and the duration of the run. There are several features and changes we would like to see in future development of the product. Several program parameters can be optimized further, and there are more modules to implement that could boost the performance of the generated strategies. Further, the data provided to the modules can be improved by including qualitative data about the company, or industry aggregative data. 
Other improvements could come from implementing support for short selling and more accurate transaction costs. On the user side, we would like to give investors more control over the strategy building process by allowing them to create templates on which new strategies have to be based.","forecasting; stock market; investing; software; genetic programming","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:3e7839c5-d2f4-4f75-8495-ad2d818524a4","http://resolver.tudelft.nl/uuid:3e7839c5-d2f4-4f75-8495-ad2d818524a4","Banking 2.0: Developing a Reference Architecture for Financial Services in The Cloud","Bucur, A.","Hidders, J. (mentor); Zubcevic, R. (mentor)","2011","Information technology is the common denominator for all industries in the 21st century; therefore any important change in this area is prone to have an impact on small and large businesses alike. The latest shock wave storming through IT is Cloud computing. Due to the importance and sensitive nature of applications used by financial institutions, the main goal of this research is to investigate how Cloud computing could change the way services are provided to customers, and what the emerging role of IT consultancy companies is for this specific market segment. To this end, a reference architecture has been proposed based on existing models and services in combination with the opinion of various experts from Capgemini and financial institutions. The impact of the proposed model, ""Capgemini Immediate for Financial Institutions"", has been expressed from a business and an IT perspective. Also, its functionality has been showcased in a scenario meant to underline the impact of the proposed changes on the boundaries of the system and the interaction of the financial institution with other entities. This process has been evaluated and supervised by experts from Capgemini in order to meet the standards used in the industry.","Cloud Computing; Software as a Service; Financial Services; Reference architecture","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Science","","Information Architecture","",""
"uuid:cde5bb4c-62be-455a-b17f-b24ae40f73b9","http://resolver.tudelft.nl/uuid:cde5bb4c-62be-455a-b17f-b24ae40f73b9","Underbalanced drilling operations: Friction loss modeling of two phase annular flow","Van der Sluijs, I.L.","Godhavn, J.M. (mentor); De Blok, G.L.J. (mentor); Jansen, J.D. (mentor)","2011","This project develops a software tool to model the pressure loss of two-phase flow in the annulus of a well during underbalanced drilling. By adjusting the Mukherjee and Brill correlation for production/injection wells, insight is gained into which parameters influence the prediction of the frictional pressure drop during underbalanced drilling. The difference between the use of oil-base mud and water-base mud is also presented. Underbalanced drilling is the oldest drilling method and has received renewed attention in recent years for bringing new life to old reservoirs. With no reservoir impairment, this method can achieve a higher recovery factor if completed 100% underbalanced. For the success of an underbalanced drilling operation, understanding the annular frictional performance of non-Newtonian mud is crucial. This is a key factor in the development of the hydraulic program which is used in the selection of the drilling equipment. Although several simulators exist, none of them accurately predicts the pressures which are experienced in reality. In this project a power-law model for predicting frictional pressure loss in an eccentric annulus is used instead of the formulas defined by Mukherjee and Brill. After selecting the parameters with a potential impact on the frictional pressure loss, a range for each parameter was defined and a sensitivity analysis was performed to quantify the impact of changes in each parameter.","underbalanced drilling; two-phase annular flow; software tool","en","bachelor thesis","","","","","","","","","Civil Engineering and Geosciences","Applied Earth Sciences","","Petroleum Engineering","",""
"uuid:69c087e7-0e98-45cc-8737-1016cba7d745","http://resolver.tudelft.nl/uuid:69c087e7-0e98-45cc-8737-1016cba7d745","A Service Oriented Architecture Solution for Gaming Simulation Suites","Van Nuland, B.","Zaidman, A.E. (mentor); Van Deursen, A. (mentor); Verbraeck, A. (mentor)","2011","Serious Gaming is becoming a popular method for training and problem solving in companies. One of the companies that has taken an interest in this is ProRail. Together with the faculty of Technology, Policy and Management of the Delft University of Technology, they started a project to develop a gaming simulation suite for training and decision making purposes, called the Railway Gaming Suite. In order to connect the games and simulators of the suite, a solid architecture is needed. Three architectures were selected to assess their suitability, namely: Service Oriented Architectures, High Level Architecture and the FAMAS Simulation Backbone. Using the Railway Gaming Suite as a case study, we have extracted requirements (like performance and flexibility) for an architecture for gaming simulation suites using the Architectural Trade-off Analysis Method. These requirements are used to determine the suitability of the three architectures. In this thesis the research on the suitability of Service Oriented Architectures (SOA) is presented. A prototype SOA was created, called Service Oriented Gaming and Simulation (SOGS). This prototype was used to test the performance requirement for the evaluation. The suitability was investigated by evaluating SOA to see if it is able to support the requirements we found. We subsequently also compared the suitability of the other architectures. 
Intermediate results of this thesis project were used to help with the decision for selecting an architecture for the Railway Gaming Suite.","serious games; software engineering; software architecture","en","master thesis","","","","","","","","2011-05-11","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:c8e7e62e-8dbb-42a7-b1f9-7be4b50b2416","http://resolver.tudelft.nl/uuid:c8e7e62e-8dbb-42a7-b1f9-7be4b50b2416","How Dassault Systèmes Benelux can enhance the acceptance of its PLM software solutions in product design offices?","Sazci, T.O.","Horváth, I. (mentor); Schoormans, J.P.L. (mentor)","2011","Dassault Systèmes (“DS”) is a leading company focused on 3D design and Product Lifecycle Management (“PLM”) software. PLM is the process of managing the entire lifecycle of a product, from its conceptual creation, through design and manufacture, to its service and disposal. PLM integrates people, data and processes with business systems and provides the product information backbone for companies and their extended enterprises. The solutions offered enable client companies to introduce innovative products of higher quality into the market in a time- and cost-efficient way. Although PLM software is very popular among large firms such as car manufacturers and airplane factories, DS aims at targeting smaller firms, especially product design offices, in order to increase the market share of its solutions. In a market where the competition is getting more aggressive each year, DS needs to develop a new strategy to be better equipped against its competitors. DS addresses 11 different industries worldwide. Since the company intends to cover the whole life cycle of products, it is important to consider all participants influenced by the offered PLM solutions, in order to enhance productivity. The small product design offices, which define the life cycle of products, typically use PLM software programs that are neither complex nor efficient enough. 3D design is mostly carried out through competitors’ lower-scale and less complex software tools. The purpose of this project is to detect obstacles in utilization and to recommend the right course of action to enable a better rate of acceptance for the PLM software. 
During the execution of the project, the marketing strategies and organization of DS and the content of the offered software solutions will be investigated in order to improve the acceptance and adoption of the PLM software solutions by small product design offices. The project will be elaborated from three different perspectives and will be based on the market in the Benelux countries, which is the problem owner’s area of responsibility. Below is the general outline with respect to the perspectives within the project: (i) The first perspective is the marketing strategy and product development policy of DS itself. The structure of the offered PLM software and marketing-related issues will be analyzed within the company. (ii) The second perspective is the situation of product design offices in the Benelux and their motives in choosing certain products and possible modifiers of their acceptance. It is known that the demand from these design offices depends very much on their role in and their relationship to a product design process. (iii) The third perspective is the value-added resellers in the Benelux, the companies that sell existing software and add their own value by delivering services such as training, technical support, methodologies and customizations. The activity of these resellers and their motivation towards PLM system application will be the main concern. In the analysis of these three issues, the efficiency of communication and the flow of information between all these parties will be analyzed in order to understand whether the appropriate language is being used. The general objective of the project is to identify significant factors that have a major influence on the company’s commercial and technical activities. The project will start with a situation analysis to understand the market and its dynamics. 
The situation analysis will consist of (i) market research, (ii) competitor research and (iii) internal company research, including the product portfolio and the business model. The second stage will be an extensive literature review on PLM software acceptance in the context of product design offices to understand and discover the influencing factors of acceptance. The next stage will investigate the design market in the Benelux countries. The number of offices active in the field, their domain of activity, and their involvement in different tasks of design processes will be analyzed. During this investigation a series of interviews will be conducted, focusing on the previously elaborated acceptance factors. Then, this information will be used to understand the dynamics of design processes, the needs of designers and their expectations towards comprehensive PLM solutions. The conclusions will be used to create a strategy to improve acceptance of DS’ PLM products in Benelux product design offices. This strategy will contain recommendations which may cover new product proposals, a new system, a way of understanding and a marketing strategy.","design; design offices; software; CAD; strategy; market","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Design Engineering","","Master of Science Strategic Product Design","",""
"uuid:62139b46-2ce9-40c7-b77d-3ccc88b1a6a5","http://resolver.tudelft.nl/uuid:62139b46-2ce9-40c7-b77d-3ccc88b1a6a5","Knowledge Network","Boekesteijn, J.; Broersma, B.W.","Manuel, B.A. (mentor); Geers, H.J.A.M. (mentor); Sodoyer, B.R. (mentor)","2011","Within Tam Tam, the knowledge of employees is stored across various information systems. The company's goal is to make all of this knowledge effectively searchable from a single central point. This central point is the SharePoint platform, a Microsoft product. For this project, SharePoint was extended so that all information can be annotated with tags and ratings. Users can use this additional metadata when searching the system. At the start of this project, an analysis was made of the current systems used within Tam Tam to share information. Based on the results of this analysis, a design was made for the SharePoint extension. This design was implemented and tested in the SharePoint environment. Within 12 weeks, a working prototype was delivered and tested by end users. Before the start of this project, we had no experience with the SharePoint platform. At the beginning of the project, we therefore did a lot of research by writing small tests. This research was hampered by the fact that certain parts of the SharePoint platform are undocumented or only partially documented. Parts of the written tests were used in the final version of our prototype. As a result, the design of some parts of the system only emerged afterwards.","tam tam; knowledge network; sharepoint; bachelorproject; tag cloud; tagging; rating; software engineering","nl","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:381f7d06-e79b-46be-982a-a9afb0b96247","http://resolver.tudelft.nl/uuid:381f7d06-e79b-46be-982a-a9afb0b96247","Hierarchy in Meritocracy: Community Building and Code Production in The Apache Software Foundation","Castaneda, O.F.","Van Eeten, M.J.G. (mentor); Scholten, V.E. (mentor); Van Wendel de Joode, R. (mentor)","2010","This research is about code production in top-level open source communities of The Apache Software Foundation (ASF). We extensively analyzed Subversion repository logs from 70 top-level Apache open source projects in the ASF from 2004 to 2009. Based on interactions in code production during one-year periods we constructed networks of file co-authorship that gave us access to the organization of Apache open source communities. This allowed us to measure graph level properties, like hierarchy and clustering, and their influence on the outputs of code production. Apache communities are groups of individuals that organize their code production efforts in order to develop enterprise-grade open source software. The ASF explains the success of its communities and the software they produce by claiming to have instituted a meritocracy that brings contributors together in a way that significantly influences code production, namely by building communities instead of only focusing on technical properties of the source code like modularity. Self-organization theory has found that the role of institutions is minor. In this research we test and confirm the theory of self-organization, and find that the meritocracy institution does not influence code production.","open source; self-organization; institutionalization; management of innovation; management of technology; software management","en","master thesis","","","","","","","","","Technology, Policy and Management","Policy, Organization, Law and Gaming","","Management of Technology","",""
"uuid:4ab25966-8291-4ad2-bed2-2962262af902","http://resolver.tudelft.nl/uuid:4ab25966-8291-4ad2-bed2-2962262af902","Component diagram recovery with dynamic analysis","Metselaar, P.A.","Zaidman, A. (mentor); Borota, N. (mentor)","2010","By evaluating the architecture of a software system, ways to improve the system's quality attributes (such as its performance and modifiability) can be identified and valuable lessons can be learned which may also be applied to other systems. An architecture evaluation requires an up-to-date description of the architecture, which is often unavailable. In such a case, reverse engineering techniques can be used to recover it. For an effective and efficient recovery and evaluation of an architecture, the scope of the recovery should be narrowed to the parts of the system that are relevant for the evaluation and the recovered architectural views should be useful for a wide range of system stakeholders. This thesis presents a case study, in which these issues are addressed by using dynamic analysis and Prolog to recover architectural views. A survey involving representatives of several groups of stakeholders was conducted to assess the usefulness of a recovered view. The results show that the approach is potentially useful, but that more work is needed to further evaluate it and to make it more usable in practice.","dynamic analysis; reverse engineering; software architecture recovery","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:b3716a4c-9d79-4795-a689-7805d56420a4","http://resolver.tudelft.nl/uuid:b3716a4c-9d79-4795-a689-7805d56420a4","Development of the Energy Transition Model: Introduction of the Object Oriented Modeling method","Van Lelyveld, W.","Bouwmans, I. (mentor); Van Daalen, C.E. (mentor); Weijnen, M. (mentor); Schoenmakers, D. (mentor); Wirtz, A.G. (mentor)","2010","Global turmoil concerning fossil fuels, questions relating to sustainable sources of energy and difficulties with sharing knowledge led to the creation of the Energy Transition Model (ETM) at the start of 2008. The ETM aims to be a transparent, comprehensive, fact-based, and independent model about energy related matters, ranging from CO2 emission levels to sustainability targets. The original ETM was developed in Microsoft Excel® until the beginning of 2010. Because of the choice of Excel as a platform for the model, it suffered from some serious limitations, such as versioning problems, limited availability, and lack of compatibility. Independent of the chosen software, the company faced issues with the high level of complexity and a lack of transparency. Due to these limitations, the plan arose to develop software that relies on open standards and independent, open-source software, so the model could be used without depending on commercial software packages. In the first attempt to create a new application, an effort was made to translate the Excel calculations to the desired platform, Ruby on Rails. Due to the model’s complexity, this approach quickly proved fruitless. The way of thinking in the software development did not match the model’s design. While attempting to clarify the design of the model, it became clear that the new model had to be developed in another fashion. This resulted in the research question of this thesis: What method should be used for the further development of the ETM to fulfill Quintel’s requirements? 
Combining model and software development methods into one resulted in the Object Oriented Modeling (OOM) method. The modeling steps in the OOM method are based on the model cycle, and the software steps are based on a combination of Boehm’s spiral model and iterative and incremental development (IID). The OOM method has resulted in the development of a network structure of converters in the model, which the software can use in a standardized calculation. For the ETM, this converter concept uses the thermodynamic law of conservation of energy and has become the basis of the model. The basic structure of the converter concept supplies the required transparency of the model, and provides the flexibility to adjust or extend the model. In conclusion, the way of thinking for both the software and the model was combined in the OOM method, which resulted in the converter concept. This has led to a model that fulfills Quintel’s requirements for the ETM.","energy; transition; model; software; development; method; OOM; OOP; cycle; IID; spiral","en","master thesis","","","","","","","Campus only","","Technology, Policy and Management","Energy & Industry","","Systems engineering, Policy analysis and Management","",""
"uuid:824f5394-1d2d-470a-8e23-4f82abe8ee77","http://resolver.tudelft.nl/uuid:824f5394-1d2d-470a-8e23-4f82abe8ee77","Cellular beam-columns in portal frame structures","Verweij, J.G.","Bijlaard, F.S.K. (mentor); Romeijn, A. (mentor); Abspoel, R. (mentor); Hoogenboom, P.C.J. (mentor); Vassart, O. (mentor)","2010","Theoretical and numerical research into the application of cellular beam-column members in portal frame structures. Two failure mechanisms requiring additional research if cellular beams are to be applied as column members have been investigated: 1) member flexural buckling, and 2) local web-post buckling. The flexural buckling behaviour of cellular columns has been shown to be similar to that of plain-webbed beams. A simplified design rule is proposed for checking the ultimate flexural buckling load capacity of cellular columns, and is shown to yield safe results. An extensive parameter study has shown an approximately linear relation between the web-post buckling capacity and an applied axial force. Although this influence is not effectively accounted for in the presently available models for web-post buckling, these still turn out to be sufficiently conservative to be applied in column design. The results are applied to the design of a portal frame consisting entirely of cellular members, by means of a design tool developed in Microsoft Excel using VBA. This tool has been validated against 2D and 3D finite element analyses for different load cases using the finite element software SAFIR.","cellular beam-columns; web-post buckling; web openings; portal frame; steel structures; SAFIR finite element software","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Design and Construction","","","",""
"uuid:d4503d70-aab0-4e30-bc00-ce9f6ff86ac2","http://resolver.tudelft.nl/uuid:d4503d70-aab0-4e30-bc00-ce9f6ff86ac2","Supporting the Tennis Coach: Automatically Analyzing and Evaluating Tennis Footage","Kuijpers, M.J.","Hidders, J. (mentor); Houben, G.J. (mentor); Hendriks, E. (mentor)","2010","Video support is becoming an indispensable tool in tennis practice sessions, especially at the professional level. Cameras are used to record a player and the currently available software is used to optimize the player's tennis technique, i.e. the biomechanics. Unfortunately, the tactical side of tennis is underexposed in terms of available software. Tennis can be seen as a spatial-temporal game: the dimensions of the court are fixed and the ball goes from player A to player B in a finite amount of time. The work presented in this thesis shows a method to flexibly evaluate a tennis game based on the footage of a single mounted camera. Software is used to extract spatial-temporal data from the tennis footage and a spatial-temporal language based on first order logic is designed to query the spatial-temporal data. The implemented prototype of this thesis' work provides a graphical user interface in which the user is able to execute queries and to see the movie fragments that meet the requirements of the spatial-temporal query.","video analysis; tennis software; SQL","en","master thesis","","","","","","","","2010-10-13","Electrical Engineering, Mathematics and Computer Science","Computer Science","","Information Architecture","",""
"uuid:9bb67144-311c-4a9b-92e6-0d8b0b899d27","http://resolver.tudelft.nl/uuid:9bb67144-311c-4a9b-92e6-0d8b0b899d27","Capturing and Predicting the Integration Process of an Embedded Software Company","Reijerse, M.A.","Van Solingen, D.M. (mentor)","2010","In 2009 TomTom developed a model, called the MFM-model, which should reflect the maturity, feasibility and progression of a Personal Navigation Device software integration project. However, this model did not reflect all required aspects of the integration project and was therefore unable to correctly reflect the maturity, feasibility or progression. Furthermore, creating and maintaining this model proved to be too time-consuming. In this thesis we identify the problems of this model, propose a number of improvements to eliminate these problems and explain how these improvements have been implemented. In addition, we discuss how the model can be automatically generated from Jira and Perforce in order to reduce the effort required to create and maintain it. As an end result, this thesis delivers an MFM 2.0 prototype, an automated and improved version of the initial model. We review this prototype by comparing survey results taken in the initial situation and in the improved situation. To further inspect this prototype, a small case study is performed to analyze its accuracy, usage and importance.","feasibility; maturity; model; software","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:409dd713-b5c1-4679-bc11-4c1d88f8a2af","http://resolver.tudelft.nl/uuid:409dd713-b5c1-4679-bc11-4c1d88f8a2af","Automatic Unit Test Generation","Den Hollander, M.A.","Zaidman, A.E. (mentor); Boogerd, C.J. (mentor)","2010","While test generators have the potential to significantly reduce the costs of software testing and have the ability to increase the quality of the software tests (and thus, the software itself), they unfortunately have only limited support for testing object-oriented software and their underlying test generation techniques fail to scale up to software of industrial size and complexity. In this context, we developed JTestCraft, a state-of-the-art test generator for the Java programming language that deals effectively with all object-oriented programming concepts, such as object array types, inheritance and polymorphism. Furthermore, JTestCraft can locate all relevant test cases due to the use of the novel Candidate Sequence Search algorithm. Other novel concepts introduced in this thesis include the Constraint Tree data structure to improve scalability and the Heap Simulation Representation to simplify the implementation of the test generator. We evaluated JTestCraft by looking at its ability to generate tests that obtain high code coverage and by comparing the results to human-crafted tests. In addition, the performance of JTestCraft is compared against similar tools. Finally, we give pointers for further research to improve the performance and usability of future test generators.","unit testing; software verification","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:502daf01-f767-4f35-854f-44f7ccd3dcea","http://resolver.tudelft.nl/uuid:502daf01-f767-4f35-854f-44f7ccd3dcea","Developing a Decision Support System for Business Model Design: The case of Software-as-a-Service","Daas, D.","Bouwman, W.A.G.A. (mentor); Overbeek, S.J. (mentor); Hartmann, L. (mentor); Hurkmans, A.A.M. (mentor)","2010","This thesis outlines the process of designing a decision support system (DSS) for business model design within an organizational context. The DSS is also developed and applied to several critical design issues for a specific Software-as-a-Service provider. An empirical analysis is carried out to estimate reservation prices, which are used as an input for the DSS.","decision support system; network-centric business model design; bundling; pricing; financial arrangements; software-as-a-service","en","master thesis","","","","","","","","","Technology, Policy and Management","Section of Information and Communication Technology","","Systems Engineering, Policy Analysis & Management","",""
"uuid:6afeebc9-b574-453d-ac21-5682f57686bc","http://resolver.tudelft.nl/uuid:6afeebc9-b574-453d-ac21-5682f57686bc","Understanding Ajax Applications by using Trace Analysis","Matthijssen, N.A.","Zaidman, A. (mentor); Storey, M.A. (mentor); Bull, I.R. (mentor); Van Deursen, A. (mentor)","2010","Ajax is an umbrella term for a set of technologies that allows web developers to create highly interactive web applications. Ajax applications are complex; they consist of multiple heterogeneous artifacts which are combined in a highly dynamic fashion. This complexity makes Ajax applications hard to understand, and thus to maintain. For this reason, we have created FireDetective, a tool that uses dynamic analysis at both the client (browser) and server side to facilitate the understanding of Ajax applications. Using an exploratory pre-experimental user study, we see that web developers encounter problems when understanding Ajax applications. We also find preliminary evidence that the FireDetective tool allows web developers to understand Ajax applications more effectively, more efficiently and with more confidence. We investigate which techniques and features contributed to this result, and use observations made during the user study to identify opportunities for future work.","Ajax; Program understanding; Software maintenance; Reverse engineering; Empirical study; Dynamic analysis; Web applications","en","master thesis","","","","","","","","2010-05-20","Electrical Engineering, Mathematics and Computer Science","Computer Science","","","",""
"uuid:d329225c-258f-4268-88fe-901fef621a3a","http://resolver.tudelft.nl/uuid:d329225c-258f-4268-88fe-901fef621a3a","Software maintenance in a data distribution service with complex event processing","Pesman, T.","","2010","This thesis covers the topic of software maintenance on a system which consists of a Data Distribution Service (DDS) and a Complex Event Processing (CEP) engine. Software maintenance on this system is hard to perform because of the dependencies between the different components. This thesis answers the main research question: “To what extent do existing software maintenance principles apply to changing a running software system based on a Data Distribution Service with Complex Event Processing?”. To answer this research question, an existing change request procedure is used as a basis to create a new change request procedure. A formalising method is added to give the developer a formal way to analyse the impact of a change. This is needed because, with the dependencies within the DDS/CEP system, it is easy to forget to change an important part of the system. The hardest part of the change request is the fact that the system is already running in a production environment, so if a mistake is made, data may be lost. To help with this complex problem, a DDS monitoring tool is developed in this thesis, which visualises the structure of the DDS. This tool has more features to ease the maintenance of the system, such as highlighting edges in the graph with similar QoS settings. A case study is performed on a prototype of the system to show that this change request procedure is sufficient; this is verified with the tool.","DDS; CEP; Software maintenance; Data Distribution Service; Complex Event Processing","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:a0efdfbc-5ef8-47d2-85fc-6cc07d739130","http://resolver.tudelft.nl/uuid:a0efdfbc-5ef8-47d2-85fc-6cc07d739130","Haalbaarheidsstudie naar het gebruik van software-defined radar voor een adaptieve cruisecontrol","An, F.; Eroglu, I.; Slotema, M.","Van der Veen, A.J. (mentor); Leus, G.J.T. (mentor)","2010","A study of radar theory for use in an adaptive cruise control system.","software-defined radar; adaptive cruise control","nl","bachelor thesis","","","","","","","","2010-07-03","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","Circuits and Systems","",""
"uuid:95f75994-d088-4026-a6e4-c4c7cb28e70a","http://resolver.tudelft.nl/uuid:95f75994-d088-4026-a6e4-c4c7cb28e70a","Designing great User Experience for a Web-Based Collaboration tool by gaining insights from the Generation Y Office workers","Sukirman, E.","Stappers, P.J. (mentor); Smulders, F.E.H.M. (mentor)","2010","There are two aims in this project. The first is to redesign a product with a great User Experience (UX). The second aim is to gain knowledge on how the company can shift their innovation process from a technology-push to a market-pull strategy, thus becoming a customer-centric organization. Both aims challenge a designer in CREATING PRODUCTS PEOPLE LOVE TO USE. I learned that collaboration is about people's experiences that evolve and grow every single day. Creating a great User Experience (UX) requires collaborative efforts from all stakeholders; thus UX must be regarded as a strategic discipline in the company. I also concluded that the context mapping method is very suitable to facilitate collaboration, especially in an entrepreneurial environment. It is a well-structured method that provides control to achieve a good result, while giving the freedom to be adjusted to a particular company culture. This is just what every entrepreneur demands! The next learning is the fact that communication is not a one-shot effort. It must be done along with continuous decision-making and evaluation: the INNOVATION process. By doing so, you create a perfect LOOP: magnifying innovation through collaboration.","participatory design; context mapping; user experience; web-based solution; generation y; entrepreneurship; collaboration; software","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Product Innovation Management","","Master of Science Strategic Product Design","",""
"uuid:cc85f9bc-78c1-4c85-b062-5821fcca6325","http://resolver.tudelft.nl/uuid:cc85f9bc-78c1-4c85-b062-5821fcca6325","Time Trace: Visual Project Management for Designers","Borthwick, M.E.G.","Pasman, G.J. (mentor); Stappers, P.J. (mentor)","2010","Time management is an essential part of every company, and the amount of energy devoted to keeping a business running smoothly can be surprisingly large. To keep track of the relevant factors there are a variety of management tools available, with different focuses and purposes. Companies tend to use a mix of these tools to fit their specific needs, and also use self-created tools such as whiteboards and checklists. Because time management is scattered over many tools, there is limited flexibility to update a project when something changes. Most tools are text- and number-based, which makes them slower to update and means that people cannot gain an overview of their situation without reading a lot. In the case of designers, the complexity of the situation increases. Most tools are suited to businesses with well-structured processes. This doesn’t match the needs of designers, who have a less structured process. Expert designers tend to work intuitively, so time plans are followed ‘opportunistically’; only for as long as they are of benefit. Task durations are difficult to estimate, since there is great variation between projects. Also, one good idea can change a whole plan, so the need for a flexible tool is even greater. Having such a tool, used consistently, would also mean that planning knowledge could be recorded accurately for future use. Primary research conducted at Kiss the Frog Productions B.V. (referred to as KTF) confirmed these assumptions, and revealed deeper insights into what planning factors make a project effective and satisfying to work on. Using the results of this study, a prototype was created for a new time management system. 
This prototype was tested with four design companies to gain further insights for its development. The end result, TimeTrace, is living, breathing time-planning software for design companies, which gives an accurate visualisation of what is happening with all design projects at all times. The default screen is an overview, showing a visualisation of all projects. This screen is used to create popup windows, which extract the relevant information needed for planning activities, by any person at any moment. All visualisations can be directly manipulated to change the plan, without the need to type in a lot of text or numbers. Information is synchronised to update across the whole system. TimeTrace addresses the problems and insights uncovered throughout the project, and in doing so effects changes to three paradigms of current time management processes: Visual not Numerical: unlike most management programs, TimeTrace does not present project data using numbers. All time-related information is represented through visual proportions, and elements such as people and projects are represented through icons and colour codes. This makes it fast to ‘read’ and flexible to adjust. It is also more in keeping with how people perceive the passing of time: as proportions of their day or week, not as numerical figures. Shared Responsibility: the usual management scheme for companies is top-down; project managers make the decisions. TimeTrace encourages managers to benefit from their employees’ knowledge, by providing a system that can be contributed to by everyone, and ensures that everyone stays informed. Situational Awareness: plans are usually used to provide a framework at the beginning of a project, and are intended to be followed as closely as possible, and updated when necessary. For designers, this close following of the plan is not a reality, so TimeTrace instead puts the focus on offering continuous situational awareness. 
Instead of making decisions based on the initial plan, it is possible to make decisions based on the reality of how the project and organisation are running at any given moment. It stays up to date by doubling as a financial system; when tasks are confirmed for financial purposes, they are fed back into the system and used to renew the visualisations. TimeTrace is also a very important record for the company. It stores all past project information, so that it may be of help in making future plans. Information about the usual duration of tasks is offered as suggestions when a new plan is being made. It also keeps a record of the typical process of the company; by detailing the main phases and sub-phases of a project, project managers and designers are reminded of the full range of the toolbox they can draw on for every project.","interface; time management; project management; information design; situational awareness; software; interaction; design","en","master thesis","","","","","","","Campus only","2011-03-19","Industrial Design Engineering","Industrial Design","","Master of Science Design for Interaction","",""
"uuid:8a144747-576f-431a-b091-79cebf20ba8b","http://resolver.tudelft.nl/uuid:8a144747-576f-431a-b091-79cebf20ba8b","The Influence of Software Maintainability on Issue Handling","Luijten, B.J.H.","Zaidman, A. (mentor); Visser, J. (mentor); Van Deursen, A. (mentor)","2010","Ensuring maintainability is an important aspect of the software development cycle. Maintainable software will be easier to understand and change correctly. The Software Improvement Group (SIG) has developed a method to measure a software system's maintainability based on well-known code metrics. In this thesis we explore the relationships between this maintainability measure and the properties of issues reported for a project. We also describe the repository extraction tool that we built for this purpose. We investigate a number of basic issue properties and show that we cannot draw conclusions based on these metrics. Two visualisation techniques are used to gain a more detailed understanding of the issue handling process. The Issue Churn View shows quantitative changes in the open issues for a project and can be used to show the changes in activity in an issue tracker. The Issue Lifecycle View provides a more detailed view of the issue handling process and shows the age composition of the issues, as well as simultaneous events on multiple issues. The SIG quality profile model is applied to issue trackers, resulting in a model that allows us to compare systems based on the speed with which issues are resolved. By comparing the ratings from this model to system maintainability, we conclude that there is a significant correlation between system maintainability and defect resolution time. The most correlated system properties are unit size and complexity and module coupling.","software repository mining; software maintainability; issue tracker","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:625c1ed1-45fc-4acd-a511-6c6bfea6dae4","http://resolver.tudelft.nl/uuid:625c1ed1-45fc-4acd-a511-6c6bfea6dae4","Architecture framework in support of effort estimation of legacy systems modernization towards a SOA environment","Anguelov, Z.V.","Van Deursen, A. (mentor); Van Diessen, R. (mentor); Gross, H. (mentor)","2010","Because of their poor Business/IT alignment, many legacy systems lack the flexibility to support rapid changes to the business processes they implement, required by today's enterprises. Furthermore, after many years of maintenance, there is a need to manage their resulting increased complexity and maximize asset utilization through reuse. The third complicating circumstance is that these legacy systems cannot simply be replaced as it is too expensive and risky. For these three reasons, legacy systems are modernized towards a Service Oriented Architecture. This thesis presents a framework for performing an impact analysis of such a modernization. It supports the trade-off analysis, needed in the planning phase, for finding the optimal selection of modernization strategies and judging their yield. The impact is expressed through the estimation of, on the one side, the effort and, on the other side, the gain of the changes these modernization strategies entail. The thesis concentrates on one of the many types of changes in modernization -- the architectural and design changes to the software system. The presented framework structures current approaches to modernization in a set of class definitions, system model relationships and a process description. This is done according to the effort they produce, preparing them for its estimation. For this effort estimation, this thesis introduces a Rating Model for quantifying the modernization effort using the system models of the framework. 
This quantification is done through the identification of so-called Points of Modernization, a categorization of the modernization strategies and a set of effort indicator metrics. Based on this framework, this thesis also presents an experiment. For a subject legacy system, concrete approaches are shown for the instantiation of the framework models, and the subsequent effort estimation is done using the indicator of Scattering. The analysis of the resulting effort and its relation to the gain shows the optimal solutions for the modernization of the subject system. In conclusion, this thesis discusses the feasibility of the approach and future work, such as more quantitative research on the remaining effort indicators.","SOA; Service Oriented Architecture; software architecture; Modernization; legacy system; effort estimation; modernization effort; trade-off analysis; impact analysis; metrics","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:ed6736b9-3284-4f45-bf29-b345afd4be04","http://resolver.tudelft.nl/uuid:ed6736b9-3284-4f45-bf29-b345afd4be04","Automatic Status Updates in Exacts Global Development Process","Valkema, M.","Hurkmans, T. (mentor); Van Solingen, D.M. (mentor); Van Deursen, A. (mentor)","2009","Due to a competitive business environment and demanding customers, many companies turn to globally distributed software development and agile methodologies. Globally distributed software engineering can bring great advantages in reducing costs and time to market, and gives access to a larger pool of skilled resources. Agile methodologies acknowledge many of the development challenges companies face, such as changing customer requirements and the necessity to have frequent releases. Introducing agile by itself can be a challenge for a big organization, especially when a blend is made between agile and distributed development, since these have some contradictory features. The global company Exact faces some challenges when introducing agility into its business processes. The goal of this research is to introduce agile practices into the globally distributed development process of Exact. It is hard to introduce agile as a whole; therefore, we first address the biggest challenges faced within Exact's development process. One of the main challenges faced within Exact is communication between the globally dispersed product management and product development teams. An agile practice addressing this communication challenge is to use automatic status updates and product updates generated by an automated build process. This research explores what features of continuous integration are useful for Exact, and the created system is evaluated. The ideas presented in this thesis have not yet been tested and evaluated on a large scale, due to time restrictions. 
However, a prototype was tested on a small scale for one project, and the initial responses from product development and product management were positive. More importantly, a shift in perception occurred at product development towards supporting a more open development process.","globally distributed software engineering","en","master thesis","","","","","","","","2010-01-08","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:1999a3fb-36c2-4898-95a4-bc2040858780","http://resolver.tudelft.nl/uuid:1999a3fb-36c2-4898-95a4-bc2040858780","DEMO applied to Financial Services","Boedhram, V.R.K.; Algoe, S.S.R.W.","Dietz, J.L.G. (mentor)","2009","Organizations often do not know exactly what they desire when it comes to information systems. Professional companies like ForMetis are needed to give advice and design tailor-made information systems for organizations that need them. To do so, one usually uses a software development methodology. ForMetis has developed such a methodology with their ten years of experience (the ForMetis methodology). The DEMO methodology is a powerful tool that has proven itself successful in the modeling of organizations. The DEMO methodology models the essence of an organization and claims to be coherent, consistent, comprehensive and concise. It is a very powerful tool for identifying the transactions of an organization and also the communication with external actors. DEMO can be used as an aid to design information systems and can check whether these systems cover the essential business processes. The ForMetis methodology consists of the following phases: planning, analysis, design, implementation and system. The analysis and design phases are the most important ones. In these phases requirements are retrieved in an informal way and are written on large sheets, which are not reusable. Informal specifications are made, and often the implementation is the specification. The new, so-called F-DEMO methodology was discussed, and a postmortem case (intermediary) was used to illustrate the added value of DEMO. The new methodology is the ForMetis methodology extended with DEMO in the analysis and design phases. In these phases the Construction Model, Process Model and the State Model are added. These models are a valuable addition to the derivation of requirements and the making of specifications. 
In order to evaluate the use of F-DEMO, a survey was held to check how many of the findings raised when producing the information system at the intermediary could have been prevented. The findings were categorized into implementation, requirements, usability, misunderstandings, wishes and irrelevant types. Of all these findings, 33.1 percent could have been prevented using the new methodology. The project time is also reduced. Therefore, the recommendation is to start using the F-DEMO methodology in future projects.","DEMO; Financial Service; Services; Business Processes; Business; Intermediary; Software Development; Information System; Methodology; Electronic Dossier","en","master thesis","","","","","","","","2009-08-28","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:9fe84fa1-0f2a-4a75-8c5b-739527ddbaa7","http://resolver.tudelft.nl/uuid:9fe84fa1-0f2a-4a75-8c5b-739527ddbaa7","Identifying Cross-Cutting Concerns Using Software Repository Mining","Mulder, F.","Zaidman, A. (mentor); Van Deursen, A. (mentor)","2009","Cross-cutting concerns are pieces of functionality that have not been captured into a separate module. They form a problem as they hinder program comprehension and maintainability. Solving this problem requires first identifying these cross-cutting concerns in pieces of software. Several methods for doing this have been proposed but the option of using software repository mining has largely been left unexplored. That technique can uncover relationships between modules that may not be present in the source code and thereby provide a different perspective on the cross-cutting concerns in a software system. We perform software repository mining on the repositories of two software systems for which the cross-cutting concerns are known: JHotDraw and Tomcat. We evaluate the results we get from our technique by comparing them with those known concerns. Based on the results of the evaluation, we make some suggestions for future directions in the area of identifying cross-cutting concerns using software repository mining.","cross-cutting concerns; software repository mining; aspect mining; frequent itemset mining","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:f795a322-00cb-47ab-ae86-87cef75fd836","http://resolver.tudelft.nl/uuid:f795a322-00cb-47ab-ae86-87cef75fd836","Logging TOPdesk Enterprise - Bachelorproject Technische Informatica","De Gans, M.; Verloop, D.","Geers, H.J.A.M. (mentor); Spilker, R. (mentor); Grootjans, R.J. (mentor); Sodoyer, B.R. (mentor)","2009","Final report of the Computer Science bachelor project: the development of an analysis system for log files generated by the TOPdesk Enterprise application.","logfiles; analyzer; TOPdesk; software engineering; database","nl","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Engineering","","","",""
"uuid:cb213f3a-f6f5-41e0-b56f-07f13f37af22","http://resolver.tudelft.nl/uuid:cb213f3a-f6f5-41e0-b56f-07f13f37af22","Technological support for distributed agile development","Dullemond, K.; Van Gameren, B.J.A.","Van Solingen, D.M. (mentor); Sodoyer, B.R. (mentor); Van Deursen, A. (mentor)","2009","Because of the distance between the dispersed development locations, Global Software Development (GSD) is confronted with challenges regarding communication, coordination and control of the development work. At the same time, agile software development is strongly built upon communication between engineers and has proven its benefits, albeit mostly at a single site. As such, it might be advantageous to combine GSD with agile development. This blend, however, is not straightforward, since the distributed and agile development approaches might have conflicting convictions. In this thesis we discuss the advantages and challenges of combining GSD with agile development based on a literature study. The main results presented in the theoretical part of this thesis (Part I through V) are: (i) aspects of agile software development, (ii) benefits and challenges associated with these in relation to GSD, (iii) categories of technological support for agile GSD, (iv) a framework depicting the mutual relations among them and (v) a discussion regarding specific technologies that support collaborative development in relation to this framework. 
Based on one of the recommendations we make in the theoretical part of this thesis we also perform practical research (Part VI) in which we define a list of requirements for an Integrated Collaborative Development Environment (ICDE) and show the technical feasibility of a number of concepts which realize these.","global software development; agile software development; benefits; challenges; aspects of agile software development; categories of technological support for agile GSD; technology; Integrated Collaborative Development Environment; categories of technological support for GSD","en","master thesis","","","","","","","","2009-06-19","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:8660f6f6-248a-4c50-8127-e8f8b3aab582","http://resolver.tudelft.nl/uuid:8660f6f6-248a-4c50-8127-e8f8b3aab582","Evaluating Software Security Aspects through Fuzzing and Genetic Algorithms","Zhang, Y.","Van der Raad, K. (mentor); Botma, B. (mentor); Moonen, L. (mentor); Gross, G. (mentor); Van Deursen, A. (mentor)","2008","Improving software security evaluation by combining fuzzing and genetic algorithms.","fuzzing; genetic algorithm; software security","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:f3abc79e-6b62-4faf-a977-3845bb3ebc5c","http://resolver.tudelft.nl/uuid:f3abc79e-6b62-4faf-a977-3845bb3ebc5c","JRET: A tool for the reconstruction of sequence diagrams from program executions","Voets, R.","Cornelissen, S.G.M. (mentor)","2008","As opposed to static analysis, in which source code is inspected in order to increase program understanding, dynamic analysis concerns the actual execution of a program and the collection of runtime data. Several strategies to retrieve dynamic information exist, including source code instrumentation and the use of a customized debugger. Since the execution of a program is traced, one will be provided with detailed information on important aspects such as polymorphism and late binding. This detailed information, however, comes at a price. A major drawback of dynamic analysis is the vast amount of data produced. Visualization tools need to deal with this problem by, for example, applying certain abstractions in order for the information to become human-readable. In this research, we developed such a visualization tool that visualizes the execution of programs through sequence diagrams: JRET. We describe the strategy used, show how it attempts to tackle the aforementioned problem, and illustrate its contribution to program comprehension through a case study.","program comprehension; software maintenance; visualization; sequence diagrams; java","en","master thesis","TU Delft, Electrical Engineering, Mathematics and Computer Science, Information and Communication Technology (ICT)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:2edcbb82-6649-454e-8195-129055a0b13d","http://resolver.tudelft.nl/uuid:2edcbb82-6649-454e-8195-129055a0b13d","Studying Co-evolution of Production and Test Code Using Association Rule Mining","Lubsen, Z.A.","Zaidman, A.E. (mentor)","2008","Unit testing is generally accepted as an aid to produce high quality code, and can provide quick feedback to developers on the quality of the software. To have a high quality and well maintained test suite requires the production and test code to synchronously co-evolve, as added or changed production code should be tested as soon as possible. Traditionally the quality of a test suite is measured using code coverage, but this measurement does not provide insight in how tests are used by developers. In this thesis we explore a new approach to analyse how tests in a system are used based on association rules mined from the system’s change history. The approach is based on the reasoning that an association rule between two entities, possibly of a different type, is a measure for the co-use of the entities. Case studies show that analysing all the resulting rules allows us to uncover the distribution of programmer effort over pure coding, pure testing, or a more test-driven practice. Another application of our approach is that we can express the number of tests that are truly co-evolving with their associated production class.","software testing; co-evolution; software maintenance; datamining","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","Software Engineering","",""
"uuid:48b95ec8-a5aa-477d-8640-64a48d3d4fdf","http://resolver.tudelft.nl/uuid:48b95ec8-a5aa-477d-8640-64a48d3d4fdf","A Quantitative Model for Hardware/Software Partitioning","Meeuws, R.J.","Bertels, K.L.M. (mentor)","2007","Heterogeneous System Development needs Hardware/Software Partitioning performed early on in the development process. In order to do this early on predictions of hardware resource usage and delay are necessary. In this thesis a Quantitative Model is presented that can make early predictions to support the partitioning process. The model is based on Software Complexity Metrics, which capture important aspects of functions like control intensity, data intensity, code size, etc. In order to remedy the interdependence of the software metrics a Principal Component Analysis performed. The hardware characteristics were determined by automatically generating VHDL from C using the DWARV C-to-VHDL compiler. Using the results from the principal component analysis, the quantitative model was generated using linear regression. The error of the model di?ers per hardware characteristic. We show that for ?ip-?ops the mean error for the predictions is 69%. In conclusion, our quantitative model can make fast and su?ciently accurate area predictions to support Hardware/Software Partitioning. In the future, the model can be extended by introducing extra software metrics, using more advanced modeling techniques, and using a larger collection of functions and algorithms.","reconfigurable computing; regression analysis; software metrics; C language; hardware description languages; principal component analysis","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","Computer Engineering","",""
"uuid:68d154a1-74bc-4db1-b830-5396030b3493","http://resolver.tudelft.nl/uuid:68d154a1-74bc-4db1-b830-5396030b3493","Interactieve website voor publicaties","Koning, A.S.; Verwoerd, R.J.T.","Wong, S. (mentor); Van Genderen, A. (mentor)","2004","This document describe the development cycle of the website for publications for the Delft University of Technology, Faculty Electrical Engeneering, Mathematics and Computer Science, Department Paralel and Distributed Systems. This include analysis, implementation, testing and the obstacles we came across during this cycle.","software engineering; website","nl","bachelor thesis","TU Delft, Electrical Engineering, Mathematics, Computer Sci, Information and Communication Technology (ICT)","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""
"uuid:1a3566c0-5886-46e3-a7cd-39fadddf045c","http://resolver.tudelft.nl/uuid:1a3566c0-5886-46e3-a7cd-39fadddf045c","Over de ontwikkeling van telecommunicatiesoftware","Van der Pols, E.K.H.","De Kroes, J.L. (mentor); Nijhof, J.A.M. (mentor)","1989","De problemen die optreden bij de ontwikkeling van telecommunicatiesoftware, bijvoorbeeld voor een X.400 - elektronisch berichtensysteem, worden tegenwoordig bestreden met onder andere nieuwe ontwikkelmethoden en specificatietalen. Dit verslag geeft een inleiding op het probleemgebied en een overzicht van de huidige stand van zaken.","Software-ontwikkeling; softwaretechniek; levenscyclusmodellen; telecommunicatiesoftware; specificatietalen","nl","student report","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunicatie- en Verkeersbegeleidingssystemen","","","",""
"uuid:ae726a43-5d3c-4f81-883a-780c431c67f4","http://resolver.tudelft.nl/uuid:ae726a43-5d3c-4f81-883a-780c431c67f4","Marine Spill Simulation Software Set","Van Huijstee, J.J.A.","Bijker, E.W. (mentor); Massie, W.W. (mentor)","1985","The Marine Spill Simulation Software Set is based on physical and information theoretical components. The physical component of the simulation model consists of a system of differential and algebraic equations that describe processes which influence the motions and characteristics of oil at sea. This report deals especially with the information theoretical components of the simulation model. At the beginning of this report the motivations in choosing a microcomputer for this application - instead of a mainframe - are explained. Thereupon attention is transferred from hardware to software. The reasons for selecting Fortran 77 as programming language are stated and the user-friendly elements of the model are discussed and illustrated with a few examples. The simulation software structure clearly shows that the model is divided in three major modules namely a data accumulation and processing module, an actual simulation module and an output module. All input data are summed up and the way these data are handled is discussed. The present-day required processing of input data and future input possibilities bring the survey of the data accumulation and processing module to a close. Efficient integration methods for the simulation processes are selected to improve run speed. The handling of output data and the recommended future output presentation are discussed next. Finally, simulation runs are done with test data to check is the model functions correctly. Furthermore run time, accuracy, efficiency and stability are determined and assessed. 
To conclude this summary the reader is kindly recommended to try out the Marine Spill Simulation Software Set.","oil spill; software","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","Coastal Engineering Group","",""