"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:a0336a50-d169-45cb-abe7-097ba8d15084","http://resolver.tudelft.nl/uuid:a0336a50-d169-45cb-abe7-097ba8d15084","Assessment of Parkinson's Disease Severity from Videos using Deep Architectures","Yin, Z. (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Gemert, J.C. (mentor); Dibeklioglu, Hamdi (mentor); Wang, Huijuan (graduation committee); Wang, Ziqi (mentor); Geraedts, Victor (graduation committee); Delft University of Technology (degree granting institution)","2020","Parkinson's disease (PD) diagnosis is based on clinical criteria, i.e. bradykinesia, rest tremor, rigidity, etc. Assessment of the severity of PD symptoms, however, is subject to inter-rater variability. In this paper, we propose a deep learning based automatic PD diagnosis method using videos recorded during the assessment with the Movement Disorders Society - Unified PD rating scale (MDS-UPDRS) part III. Seven tasks from the MDS-UPDRS III are investigated, which show the symptoms of bradykinesia and postural tremors. We demonstrate the effectiveness of automatic classification of PD severity using 3D Convolutional Neural Network (CNN) and the PD severity classification can benefit from non-medical datasets for transfer learning. We further design a temporal self-attention (TSA) model to focus on the subtle temporal vision changes in our PD video dataset. The temporal relative self-attention-based 3D CNN classifier gives promising classification results on task-level videos. We also propose a task-assembling method to predict the patient-level severity through stacking classifiers. We show the effectiveness of TSA and task-assembling method on our PD video dataset empirically.","Parkinson's Disease; Deep learning; Transfer learning; Self-attention; Multi-domain learning","en","master thesis","","","","","","","","","","","","","",""
"uuid:2a0538ba-c79e-4572-bdb7-c82db303f169","http://resolver.tudelft.nl/uuid:2a0538ba-c79e-4572-bdb7-c82db303f169","Question Retrieval based on Community Question Answering: Baseline Selection among Retrieval Models on two Datasets","Yang, Wanning (TU Delft Electrical Engineering, Mathematics and Computer Science)","Hauff, Claudia (mentor); Wang, Huijuan (graduation committee); Zuñiga Zamalloa, Marco (graduation committee); Delft University of Technology (degree granting institution)","2019","Community question answering (CQA) platforms provide a social environment for users to share knowledge online. Users can submit complex and subjective questions on CQA platforms and then derive the desired answer from other community users. A large number of user-generated data has been produced by various CQA sites (e.g., Quora, StackExchange) and been used in different CQA researches. Question retrieval task is one of the popular CQA tasks aiming at solving the overloading issue of CQA platforms and increasing user satisfaction by reducing their waiting time. A question retrieval system is expected to automatically retrieve similar questions from the CQA archives regarding a new question, and the answers to similar questions are returned to users directly.
Different information retrieval (IR) approaches have been proposed for question retrieval task ranging from the conventional retrieval models to the learning to rank models and neural ranking models. However, the IR community is now facing the issue of overusing the weak baselines. Thus, it is hard for researchers to identify the reported improvement of the newly-proposed methods, which greatly impedes the development of the community. Some researchers have already proposed several competitive baselines for ad-hoc retrieval task, but currently, the proposals of strong baselines for question retrieval are still not enough. Thus, this work targets on identifying the suitable baselines for question retrieval task on different datasets. We conduct an empirical comparison among different retrieval models on two representative datasets and analyze the performance of models on different question sets. Analyzing on CQA questions is challenging since the CQA questions are more diverse and complex, compared to the questions on traditional question answering (QA) (e.g., Wikipedia) system as well as the queries on traditional search engine (e.g., Google). Our work investigates the impact of the question from two perspectives. We first display how retrieval performance changes on various question sets (e.g., questions with different lengths and different levels of specificity) and then explain the reasons for the performance changes. Moreover, we conduct an error analysis to reveal the hard types of questions for different retrieval models on two datasets. In order to overcome the existing weakness of the retrieval models, we further select two techniques that have already proven effective in other retrieval tasks. We hypothesize that the two techniques can also be useful on question retrieval task. We then implement the two techniques on our datasets to validate the hypothesis. 
Our findings show that one of them can not help to enhance the retrieval effectiveness of models due to the different characteristics of task design while another technique successfully demonstrates the additive effectiveness gains. Based on our findings, we find out the suitable baseline models on different datasets as well as emphasize their relative strength and limitation. We believe our work can provide useful guidance on how to select an appropriate baseline for future works on question retrieval.","Community Question Answering; Baseline Selection; Question Retrieval","en","master thesis","","","","","","","","","","","","","",""
"uuid:47dd983a-0fef-4a8a-a100-5d0603cae9d5","http://resolver.tudelft.nl/uuid:47dd983a-0fef-4a8a-a100-5d0603cae9d5","Text-based conversational interface as an alternative to a crowdsensing mobile application","Thuraka, Neha (TU Delft Electrical Engineering, Mathematics and Computer Science)","Bozzon, Alessandro (mentor); Lofi, Christoph (graduation committee); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2019","Crowdsensing is a powerful tool to easily sense diverse physical environments by collecting data from an undefined network of people. With advancements in smartphone technology, there has been an increase in the use of mobile applications to perform crowdsensing tasks. However, previous work shows that mobile applications have issues with attracting and retaining users, thus limiting the utility of crowdsensing as a data collection technique. To mitigate these issues, we propose the use of conversational agents (chatbots) as an alternative to custom mobile applications for crowdsensing applications. We hypothesize that the use of commonly used text-based applications (e.g., Telegram) enriched with the automated conversational capabilities can increase the attraction and retention of crowdsensing participants.
In this thesis, we designed and implemented a crowdsensing system that supports the execution of mobile and chatbot interface. We propose a design of the text-based conversational interface that provides different elements and features of a traditional mobile application. To compare these two interfaces for performing crowdsensing tasks and to understand the differences in terms of user engagement and usability, we conducted two experiments on the TU Delft campus with students as the participants. Based on the location of the experiment, we designed four task domains and three types of tasks.
In the first experiment, we organized a 'between-subjects' study. We recruited 80 students to analyze user engagement and usability in a quantitative fashion. The experiment shows that chatbot has better user engagement and usability than the mobile application. We conducted a qualitative survey to understand the underlying reasons behind the participation patterns. Analysis of the results of this survey shows that the unavailability of the participants and the assignment of inappropriate tasks are the main reasons behind non-participation of some students.
To deepen our analysis, we organized the second experiment as a 'within-subjects' study with 10 participants in a controlled environment. The experiment shows that all participants unanimously preferred chatbot over the mobile application to perform crowdsensing tasks.
As a result of both experiments, we conclude that the text-based conversational interface can be used as an alternative to the mobile application to execute crowdsensing tasks and the former is more engaging than a mobile application interface for crowdsensing applications.
Science, the workload for teaching assistants and instructors has skyrocketed. To
reduce this workload, automated tools can be used to make the grading process easier. This paper describes the development of AuTA (Automatic Teaching Assistant), a tool that will help instructors and teaching assistants analyze and grade programming assignments and provide useful feedback to the student.","Thesis; Education; Bachelor; Code Quality; Feedback","en","bachelor thesis","","","","","","","","","","","","","Labrador",""
"uuid:00aac32f-e154-4181-baea-c7c00994da12","http://resolver.tudelft.nl/uuid:00aac32f-e154-4181-baea-c7c00994da12","Feasibility Study of LUFAR","Liefaard, Maxim (TU Delft Electrical Engineering, Mathematics and Computer Science); Bruens, Raoul (TU Delft Electrical Engineering, Mathematics and Computer Science); van Hassel, Dana (TU Delft Electrical Engineering, Mathematics and Computer Science); Noorthoek, Sterre (TU Delft Electrical Engineering, Mathematics and Computer Science)","Abeel, Thomas (graduation committee); Verma, Maneesh (mentor); Verhoeven, Chris (mentor); Visser, Otto (graduation committee); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2019","With the steady increase in space missions, enabled through technological advances and increase of commercialisation within the space flight industry, both more and increasingly complex missions can be designed for space. To this end, the Lunar Zebro project competes within this field through its small lunar rover design, drastically decreasing deployment costs and risk of the mission. The road map of Lunar Zebro aims to have a multitude of rovers deployed on the Moon, being able to complete several tasks like exploring, observing, and mapping. Since this concept of rover cooperation adds a novel level of complexity to the mission, a feasibility study is required to look into the difficulties of navigating the Moon with a larger group of rovers. LunarSim is the software package developed during this project. LunarSim aims to facilitate a simulation environment in which Lunar Zebro rovers and space mission designs can be tested and validated. To legitimise the workings of the simulation, a few scenarios have been developed to test the core functionalities of the software product. These scenarios are based on phases in a practical mission plan that consists out of navigating to and observing a crater location. 
The scenarios is evaluated through examination of a set of defined fitness criteria. In this report, the reader will find documentation on the development process of LunarSim: the simulation in Unity, the ROS back-end, and the bridge between these two systems. Additionally, the report elaborates how the developed software was used to aid in the feasibility study of LUFAR. First, initial research and requirements are formulated to define the scope of the simulation, after which the software architecture is introduced. Then, the systems implemented for the simulation are explained. Subsequently, the implemented rover behaviour algorithm that was used for testing is explained, with additional resources on how to develop a new custom rover behaviour. After this, an evaluation is given of the simulation based on the initial requirements and research with future research and concluding remarks. At the end of the report, the technical specifications in terms of software architecture, simulation environment, and rover behaviour are defined to give an in-depth view of LunarSim.","Space; Moon; Rover; Mission design; Simulation; Multi-Agent System; ROS; C#; Unity; C++; Systems Engineering","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","Lunar Zebro",""
"uuid:c2c6ee21-de0d-4a6b-8d78-7ee7de1f1e00","http://resolver.tudelft.nl/uuid:c2c6ee21-de0d-4a6b-8d78-7ee7de1f1e00","Estimatic","Rietveld, Jip (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology); de Vries, Rolf (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology); de Boer, Jaap (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology); Hondelink, Dieuwer (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology)","Bozzon, Alessandro (mentor); Visser, Otto (graduation committee); Wang, Huijuan (graduation committee); Janssen, Richard (graduation committee); Delft University of Technology (degree granting institution)","2019","Amsterdam Airport Schiphol has 5 runways, each of which can be used for take-off or landing of aeroplanes. The weather heavily influences which runway configuration air traffic control might pick. Airport Forecasting Service (AFOS) predicts which configuration of runways works most efficiently given a set of expected weather conditions and the standard deviations of wind components. These standard deviations give the system an indication of the accuracy of the weather forecasts.
Currently, the KNMI (Royal Netherlands Meteorological Institute) is the only meteorological institute that provides these standard deviations along with the weather forecast. This raises the main research question of this report: is it possible to make sufficiently accurate estimations of the standard deviation of wind direction and wind speed using historical data and future weather expectations? Estimating these standard deviations has been researched with two different approaches: a statistical approach and a machine learning approach.
Statistical Methods: Four fitting methods have been researched in search of the best statistical model to estimate the standard deviation of wind direction and speed: the Maximum Likelihood Method (MLM) and three Least Squares Method implementations of a Weibull, Minimum Weibull and Double Weibull distribution. The performance of aggregates over the outcomes of these four methods was also researched: one aggregate takes the minimum standard deviation of the four, the other takes the mean.
Of the four fitting methods, MLM not only performs best but also most consistently. Taking the aggregates into account, MLM is more consistent than the minimum aggregate, but the minimum aggregate outperforms it. Neither of these methods managed to meet the success criteria.
Machine Learning: In terms of machine learning, the problem of estimating the standard deviations of wind direction and wind speed is a regression problem. The following machine learning models have been researched for Estimatic: MLPN, LSTM RNN, ERNN and RBFN.
LSTM RNNs outperform MLPNs, RBFNs and ERNNs for both wind direction and wind speed standard deviation estimation. However, LSTM RNN performance still did not meet the success criteria.
The research concludes that it is not possible to make accurate enough estimations of the standard deviation of wind components using the historical data and future weather expectations available for Amsterdam Airport Schiphol.","Wind speed; Wind direction; Wind; Schiphol; Weather forecast; Standard deviation; KNMI; Machine learning; Statistical methods; Statistics","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","","52.3105386, 4.7682744"
"uuid:041533eb-6010-418e-b3d1-80ff7cc4996b","http://resolver.tudelft.nl/uuid:041533eb-6010-418e-b3d1-80ff7cc4996b","Server Program for Retail RFID System with advanced message handling","Beijen, Mike (TU Delft Electrical Engineering, Mathematics and Computer Science); Chong, Kevin (TU Delft Electrical Engineering, Mathematics and Computer Science); Holland, Callum Robert (TU Delft Electrical Engineering, Mathematics and Computer Science); Keller, Glenn (TU Delft Electrical Engineering, Mathematics and Computer Science)","Aniche, Maurício (graduation committee); Pawelczak, Przemek (graduation committee); Visser, Otto (graduation committee); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2019","Our challenge was to create a server program for retail RFID system with advanced message handling. However, RFID software solutions are heavily dependent on the requirements and use cases of the system. The developed solution allows for convenient interaction with RFID tags through different components of the designed system. The complete system has been developed with scalability and maintainability in mind and is thoroughly tested using unit testing, integration testing and end-to-end testing.","","en","bachelor thesis","","","","","","","","2024-07-03","","","","Computer Science and Engineering","",""
"uuid:c2458e36-234b-43cc-965e-b5d26f0b8809","http://resolver.tudelft.nl/uuid:c2458e36-234b-43cc-965e-b5d26f0b8809","Material Tracking System","Edixhoven, Tom (TU Delft Electrical Engineering, Mathematics and Computer Science); van Geffen, Hunter (TU Delft Electrical Engineering, Mathematics and Computer Science); Kruit, Bas (TU Delft Electrical Engineering, Mathematics and Computer Science); Smit, Mels (TU Delft Electrical Engineering, Mathematics and Computer Science)","Finavaro Aniche, Mauricio (mentor); Visser, Otto (graduation committee); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2019","For a steel company it is advantageous to be able to easily track steel through the production process. At Tata Steel this is currently done with the Material Tracking Table. However, generating this table takes months. Therefore a new system had to be developed. This paper describes the building of such a new system, which generates this Material Tracking Table in less than 1 hour, as well as the related systems concerning the acquisition of the input data and the visualisation of the resulting output data.","clustering; data visualisation; Web application; framework; memory management","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:1c523b11-c220-43da-b4c2-a712d6fee8d4","http://resolver.tudelft.nl/uuid:1c523b11-c220-43da-b4c2-a712d6fee8d4","Automated Transaction Monitoring","Kostense, Bastijn (TU Delft Electrical Engineering, Mathematics and Computer Science); Hageman, Rico (TU Delft Electrical Engineering, Mathematics and Computer Science); van der Wilk, Hilco (TU Delft Electrical Engineering, Mathematics and Computer Science); van Walraven, Bram (TU Delft Electrical Engineering, Mathematics and Computer Science)","van den Oever, Sander (mentor); Visser, Otto (graduation committee); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2019","For the past 10 weeks, we have been tasked with improving the performance of the transaction monitoring system of bunq, an internationally active mobile bank. bunq has requested that we improve this system by automating the training of the machine learning model, providing better input data for this model and creating additional machine learning models. During this project, we have been working at the offices of bunq on this system. This thesis will give an overview of our research, software design process and implementation.","Transaction Monitoring; Fraud; Machine Learning","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:dfa920b8-613d-4f93-9a1b-7c8c60268308","http://resolver.tudelft.nl/uuid:dfa920b8-613d-4f93-9a1b-7c8c60268308","Computer Vision for Exam Grading: Final Report","Young On, Ruben (TU Delft Electrical Engineering, Mathematics and Computer Science); van de Kuilen, Richard (TU Delft Electrical Engineering, Mathematics and Computer Science); Bijl, Robin (TU Delft Electrical Engineering, Mathematics and Computer Science); Leistra, Hidde (TU Delft Electrical Engineering, Mathematics and Computer Science); Jugariu, Timo (TU Delft Electrical Engineering, Mathematics and Computer Science)","Hugtenburg, Stefan (graduation committee); Akhmerov, Anton (mentor); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2019","Grading exams is a time-consuming activity for teachers. Zesje is an open-source tool created to aid teach-ers in exam grading and streamline the grading process. Zesje currently uses computer vision techniques torealign images, and automatically find student numbers. However, teachers can currently only use Zesje tograde questions manually. Moreover the computer vision capabilities of Zesje can be improved. To make iteasier to grade exams, it should be possible for teachers to have multiple choice questions graded automati-cally. This project describes various improvements for Zesje, most notably using computer vision for the auto-matic grading of multiple choice questions, improving the accuracy of aligning scanned submissions, andautomatically detecting blank solutions. The team had to make several choices regarding implementations and choice of technology. Design goalswere also created to serve as a guideline for the project. At the end of the project, with the features imple-mented by the team, Zesje can automatically grade multiple choice questions, identify blank solutions andhas the corresponding front-end changes that allow the user to create multiple choice checkboxes on theexam PDF. 
These features have been tested extensively. The use of Zesje also poses some ethical challenges. Using automated grading may result in the event thatsome submissions may never be seen by a grader. By using benchmarks to compare the performance of processing scans in Zesje, the team found out thatthe grading time has greatly been reduced.","computer vision; auto grading; digital grading; open source","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","Bachelor Project","52.0021256, 4.3732982"
"uuid:1f77ead5-58be-4f1d-b176-817e8761d283","http://resolver.tudelft.nl/uuid:1f77ead5-58be-4f1d-b176-817e8761d283","TelaSol: A Coach Cockpit Application","Vijlbrief, Sam (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Intelligent Systems); Kroon, Mirco (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Intelligent Systems); Janssen, Boris (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Intelligent Systems); Gerlach, Laurens (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Intelligent Systems)","Migut, Gosia (mentor); Dukalski, Rado (mentor); Wang, Huijuan (graduation committee); Visser, Otto (graduation committee); Delft University of Technology (degree granting institution)","2019","Team Sunweb, a professional cycling team and our client, is constantly looking for innovations to help them win races. They tasked us with creating an application which could assist coaches with determining the strategy during a race. This application, which we dubbed TelaSol, is supposed to run on a tablet that is mounted on the dashboard inside the coach car. For this project we developed an application that allows races to be prepared on a desktop computer and tracked during a race on a tablet-optimized interactive dashboard. On this dashboard, there will be information on the riders, the route and comments that can be added before the race.
During development we considered existing solutions, relevant literature and useful technologies to get an idea of what was possible and how we could achieve our goal. We used this knowledge to create our initial set of requirements. We then proceeded with the development of the application using an agile approach, which involved regular feedback moments with our client to update the requirements and adjust our focus accordingly. To verify the quality of our product, we relied on a combination of automated tests, user testing and validation by the client.
Initially the application was supposed to integrate live data coming from the riders during the race, but due to a regulation change we had to shift our focus. Instead, we focused primarily on creating the application for playback purposes, while keeping it adaptable to live data. The application properly performs the main tasks that were initially defined. After further development of the live data functionality and extensive situational testing, the app can be used to its full potential. Using TelaSol, Team Sunweb will improve their ability to analyze races and increase their chances of winning.
After researching state-of-the-art machine learning models for price recommendation, the architecture of the system was designed. The supplied data was preprocessed, after which a custom genetic algorithm was developed for optimising models and ensembles. After validation on real-life company data, a comparison using empirical metrics was conducted. We use these metrics to show that a bagging ensemble is the most efficient and accurate model for this purpose. This bagging ensemble outperformed the currently implemented functions, while adhering to the set bounds on response times. Lastly, recommendations are made to the company with an overview of potential future work on this subject.
In this thesis, we develop a method for the identification of scientific memes (n-grams of length 1 through 4, denoting scientific concepts) propagating within online communities. With data extracted from science-oriented correspondence in five communities on the online discussion platform Reddit and five communities on the online question-and-answer platform StackExchange, we perform a large-scale automated evaluation in which we find that memes identified in these communities correspond to the titles of Wikipedia articles, and a small-scale human evaluation in which we find that the identified memes represent concepts relevant to the community's scientific field.
Furthermore, we introduce a slight adaptation of this method to elucidate one of memetics’ predictions: the occurrence of interactions between memes, where the occurrence of one meme has a positive or negative influence on the propagation of another meme. To evaluate this method for the identification of meme interactions, we construct meme interaction networks, in which we find that the most central memes correspond to the most relevant scientific concepts.
We find that our methods are able to extract key concepts within online communities, identifying thousands of relevant concepts from millions of candidate n-grams. Thus, our method may contribute to contemporary text mining research, and could be used in place of, or in conjunction with, current approaches such as TF-IDF or LDA.","Memes; Memetics; Reddit; Epidemiology; Interaction networks; Information cascades","en","master thesis","","","","","","","","","","","","Computer Science | Web Information Systems","",""
"uuid:90d24571-dbca-4bd9-afe6-af718ea3d5c8","http://resolver.tudelft.nl/uuid:90d24571-dbca-4bd9-afe6-af718ea3d5c8","Helping Chatbots To Better Understand User Requests Efficiently Using Human Computation","Bapat, Rucha (TU Delft Electrical Engineering, Mathematics and Computer Science)","Houben, Geert-Jan (graduation committee); Bozzon, Alessandro (mentor); Kucherbaev, Pavel (mentor); Wang, Huijuan (graduation committee); Delft University of Technology (degree granting institution)","2017","Chatbots are the text based conversational agents with which users interact in natural language. They are becoming more and more popular with the immense growth in messaging apps and tools to develop text based conversational agents. Despite of advances in Artificial Intelligence and Natural Language Processing, chatbots still struggle in accurately understanding user requests, thus providing wrong answers or no response. An effective solution to tackle this problem is involving human's capabilities in chatbot’s operations for understanding user requests. There are many existing systems using humans in chatbots but they are not capable to scale up with the increasing number of users. To address this problem, we provide insights in how to design such chatbot system having humans in the loop and how to involve humans efficiently.
We perform an extensive literature survey about chatbots, and human computation applied for a chatbot, to guide the design of our reference chatbot system. Then we address the problem of cold starting chatbot systems. We propose a methodology to generate high quality training data, with which, chatbot’s Natural Language Understanding (NLU) model can be trained, making a chatbot capable of handling user requests efficiently at run time. Finally we provide a methodology to estimate the reliability of black box NLU models based on the confidence threshold of their prediction functionality. We study and discuss the effect of parameters such as training data set size, type of intents on automatic NLU model.
Based on an experimental app developed during the research phase, raw smartphone GPS data was found to be unsuitable for video rendering. To improve this data, a Kalman filter is used in combination with a smoothing algorithm. The system has been designed to allow code sharing between iOS and Android where possible, and has been implemented in Objective-C, Java, and TypeScript. Separating the system into three blocks enables code reuse, which improves the maintainability of the system. The filter has been integrated as shared code in the TypeScript implementation, which allows filtering to happen on the device. Users of the developed React Native Module are free to retrieve both the unprocessed and the processed data.
The system has been tested by means of unit tests in all three programming languages used. Tests have been executed on a continuous integration server, testing each pull request against the current code base to ensure quality. As part of the testing phase, the React Native Module was integrated into the client's smartphone application to demonstrate its use. The application was sent to a number of test participants to collect data from different routes and activities. The project can be considered a success, since all important requirements have been successfully implemented.
preferences in mind. At the end of the project, we gave a demo to TNO and they were impressed with the results of the project. West IT was also happy with the delivered product and is interested in developing it further.","Planning; Scheduling problem; Automated scheduling","en","bachelor thesis","","","","","","","","2017-07-04","","","","","",""