End User Involvement in Exploratory Test Automation for Web Applications

Abstract

The traditional way of developing websites as hypertexts, navigated link by link, is progressively giving way to the AJAX approach, in which the entire hypertext can be contained in a single web page. The resulting page offers navigation by reloading only specific parts of the page, namely the content that changes. Conventional web crawlers, applications that explore web pages in a systematic way, are not able to browse AJAX pages. To overcome this barrier, which prevents the execution of automated tasks such as web indexing or automated testing, the Software Engineering Group at TU Delft has developed Crawljax, a tool capable of crawling AJAX pages.

Crawljax already offers many possibilities. It provides default settings for simple page testing, but it can also be included in a Java project and programmed to execute more complicated tests, or to crawl specific parts of a page. For example, Crawljax can be configured to include or exclude buttons, check boxes, text areas and other page elements, focusing the crawl on the area under test. Through its various plugins it can benchmark websites, find invariants to use in regression tests, export a graphical representation of the state graph, and more. All of these possibilities are, however, restricted to Java programmers willing to learn a new tool to expand their crawling power. What Crawljax does not yet offer is a simple way, even for non-programmers, to create and execute specific test cases.

Here we present an extension to Crawljax that simplifies the process of running crawling sessions and integrity tests on web pages. We call this system CrawlMan, the Crawljax Manager. CrawlMan uses components of Crawljax and its plugins and libraries, connected to a graphical user interface, to provide automated, repeatable crawling and testing. The application allows a novice user to start crawling a web page by simply entering the selected URL; it then shows a graphical representation of the result and uses it to guide the user in refining the settings. The user can then crawl the same URL with more specific settings, inspect the new result and use the new suggestions to refine the settings, again and again. The resulting cycle, in which test results are used to improve the test itself, is the main contribution of the project. We evaluate our approach by analyzing the behavior of selected novice users during the execution of predefined tests.
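To illustrate the kind of programmatic configuration the abstract refers to, the following is a minimal sketch of driving Crawljax from a Java project. It assumes the Crawljax 3.x API (method names may differ in other versions), and the URL, tag names and limits are placeholder values chosen for illustration, not part of the original work.

```java
import com.crawljax.core.CrawljaxRunner;
import com.crawljax.core.configuration.CrawljaxConfiguration;
import com.crawljax.core.configuration.CrawljaxConfiguration.CrawljaxConfigurationBuilder;

public class SimpleCrawl {

    public static void main(String[] args) {
        // Build a configuration for the page to crawl (placeholder URL).
        CrawljaxConfigurationBuilder builder =
                CrawljaxConfiguration.builderFor("http://example.com");

        // Include or exclude page elements to focus the crawl:
        // click all anchors, but skip any link whose text is "Logout".
        builder.crawlRules().click("a");
        builder.crawlRules().dontClick("a").withText("Logout");

        // Bound the exploration so the crawl terminates (illustrative limits).
        builder.setMaximumStates(50);
        builder.setMaximumDepth(3);

        // Run the crawl with the assembled configuration.
        CrawljaxRunner crawljax = new CrawljaxRunner(builder.build());
        crawljax.call();
    }
}
```

CrawlMan's contribution is to hide this kind of configuration behind a graphical interface, so that the include/exclude rules and crawl limits shown above can be refined iteratively from the visualized results instead of being written in Java.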