Automatically Testing a Conversational Crowd Computing Platform

Abstract

The rise in the use of crowd computing platforms led to the development of Dandelion, a conversational crowd computing platform built at TU Delft. Its main goals are to connect students with researchers and to let students report on their well-being through a friendly conversational interface. Up to the time of writing, Dandelion had only been tested manually; the primary motivation behind this paper is therefore to assess the robustness and measure the responsiveness of Dandelion.

Robustness was exercised with a simulated user that behaves unexpectedly; the testing framework then classifies Dandelion's behaviour according to the C.R.A.S.H. scale. The framework itself was validated by deliberately altering Dandelion's behaviour and checking that the test results reflect the change. In addition, a lower bound on the run time of a task was estimated by running a Multi-Agent System (M.A.S.) simulation on Dandelion.
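As a rough illustration of this approach, the sketch below sends a handful of unexpected inputs to a hypothetical HTTP conversation endpoint and maps each outcome to a C.R.A.S.H.-style severity level (read here in the common Ballista sense: Catastrophic, Restart, Abort, Silent, Hindering). The endpoint URL, the `message` payload field, and the classification rules are assumptions made for illustration; they do not reflect Dandelion's actual API or the testing framework described in this paper.

```python
import enum
import requests  # assumed HTTP client; Dandelion's real interface may differ


class CrashLevel(enum.Enum):
    """Illustrative severity levels in the style of the C.R.A.S.H. scale."""
    CATASTROPHIC = "platform became unreachable"
    RESTART = "task hangs and must be restarted"
    ABORT = "task terminates abnormally"
    SILENT = "invalid input accepted without any error"
    PASS = "input rejected or handled correctly"


# A few inputs a well-behaved user would never send.
UNEXPECTED_INPUTS = ["", " " * 4096, "-1", "NaN", "<script>alert(1)</script>"]


def classify(endpoint: str, payload: str, timeout: float = 10.0) -> CrashLevel:
    """Send one unexpected input to the conversational endpoint and classify the outcome."""
    try:
        response = requests.post(endpoint, json={"message": payload}, timeout=timeout)
    except requests.Timeout:
        return CrashLevel.RESTART       # no reply within the time budget: treat as a hang
    except requests.ConnectionError:
        return CrashLevel.CATASTROPHIC  # the platform itself became unreachable
    if response.status_code >= 500:
        return CrashLevel.ABORT         # internal error instead of graceful rejection
    if response.ok and "error" not in response.text.lower():
        return CrashLevel.SILENT        # invalid input was accepted without complaint
    return CrashLevel.PASS


if __name__ == "__main__":
    for bad_input in UNEXPECTED_INPUTS:
        level = classify("http://localhost:8080/api/conversation", bad_input)
        print(f"{bad_input[:20]!r:>24} -> {level.name}")
```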

Verifying the correctness of the robustness test uncovered a faulty assumption underlying the validation of user input. Furthermore, the M.A.S. simulation estimated a lower bound of $\approx 5.788$ seconds, while also revealing that user inputs are not validated before being posted to the database.