Automatically Testing a Conversational Crowd Computing Platform

Bachelor Thesis (2021)
Author(s)

O. Kanaris (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

S. Qiu – Mentor (TU Delft - Web Information Systems)

Ujwal Gadiraju – Mentor (TU Delft - Web Information Systems)

Jie Yang – Mentor (TU Delft - Web Information Systems)

J.W. Böhmer – Graduation committee member (TU Delft - Algorithmics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2021 Orestis Kanaris
Publication Year
2021
Language
English
Graduation Date
02-07-2021
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The rise in the use of crowd computing platforms led to the creation of Dandelion, a conversational crowd computing platform developed at TU Delft. Its main goals are to connect students with researchers and to let students report on their well-being through a friendly interface. Until the time of writing, Dandelion had only been tested manually; the primary motivation behind this paper is therefore to ensure the robustness and measure the responsiveness of Dandelion.

Robustness was exercised using a simulated user that behaves unexpectedly. The testing framework then classifies Dandelion's behaviour according to the C.R.A.S.H. scale. The framework is validated by altering Dandelion's behaviour and verifying that the test results reflect the change. Furthermore, a lower bound on the run time of a task is estimated by running a Multi-Agent System (M.A.S.) simulation on Dandelion.

Verifying the correctness of the robustness test uncovered a faulty assumption on which the user input validation was based. Furthermore, the M.A.S. simulation estimated a lower bound of $\approx 5.788$ seconds, while also revealing that user inputs are not validated before being posted to the database.
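The robustness approach described above can be sketched in a few lines of Python. Everything here is illustrative, not the thesis's actual implementation: `DandelionStub`, the generated inputs, and the simplified outcome labels are assumptions standing in for the real platform and the full C.R.A.S.H. scale.

```python
import random

class DandelionStub:
    """Stand-in for the conversational platform under test (hypothetical)."""
    def reply(self, message):
        if not isinstance(message, str):
            raise TypeError("expected text input")
        if message.strip() == "":
            return ""  # accepts the input but produces no answer
        return f"echo: {message}"

def simulated_user_inputs(seed=0):
    """Generate unexpected inputs a misbehaving user might send."""
    rng = random.Random(seed)
    samples = ["", "   ", "x" * 10_000, "DROP TABLE users;", None, 42]
    rng.shuffle(samples)
    return samples

def classify(platform, message):
    """Map the platform's reaction to a simplified C.R.A.S.H.-style label."""
    try:
        answer = platform.reply(message)
    except Exception:
        return "crash"   # unhandled exception escaped the platform
    if answer == "":
        return "stall"   # input accepted, but no meaningful response
    return "none"        # handled gracefully

results = [classify(DandelionStub(), m) for m in simulated_user_inputs()]
```

A real harness would drive Dandelion's conversational interface instead of a stub, and the validation step from the abstract corresponds to deliberately altering the stub's behaviour and checking that the labels change accordingly.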

Files

Research_project_new.pdf
(pdf | 0.37 Mb)
License info not available