What's the problem?

Studies on identifying usability problems in user tests


Abstract

During product development, difficulties in use (usability problems) are hard to predict. This holds especially for interactive products with embedded software. In user tests conducted during the design process, analysts try to foresee which problems people will run into when using a product. Once the problems are identified and understood, product developers can attempt to redesign the product so that the risk of users encountering usability problems is minimized.

Extracting usability problems from observed user behavior in a consistent manner has proven very difficult. Not only is it difficult for an analyst to analyze all observations in the same way (within-analyst consistency), but different analysts also tend to uncover different usability problems (across-analyst consistency). Within-analyst consistency can be at stake when analysts become tired, less attentive, or distracted during the analysis. Across-analyst consistency also involves issues such as differences in analysts' beliefs, values, or preferences.

This thesis focuses on consistency in identifying usability problems in user tests. The DEVAN (DEtailed Video ANalysis) procedure was developed to make such analyses documentable and inspectable. DEVAN and its simplified variant SlimDEVAN were then used in an academic setting and in professional usability labs. The aim was to determine to what extent (Slim)DEVAN exposes possible sources of inconsistency and manages to reduce inconsistencies caused by fatigue, lack of vigilance, and distraction. In addition, DEVAN was applied to a comparative study that examined in detail the effect of using prototypes (instead of functioning products) in user tests.

The thesis demonstrates how user test data analyses suffer from persistent inconsistencies. The use of (Slim)DEVAN made it possible to detect causes of persistent inconsistencies, and in the comparative study DEVAN revealed effects of using prototypes in user tests. To reduce less persistent inconsistencies, advanced (automated) observation tools and more precise problem criteria are proposed. We suggest that further research focus on the consequences of inconsistencies in actual product development contexts. Eventually this will yield more insight into the quality aspects of user tests, which in turn may reduce the number of users muttering: "What's the problem? Why does this thing not do what I want it to do?"
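To make the notion of across-analyst consistency concrete, here is a minimal sketch (not taken from the thesis) of one simple way it can be quantified: the average pairwise overlap (Jaccard index) between the sets of usability problems that different analysts report for the same user test. The function name and the problem IDs are hypothetical.

```python
# Hypothetical illustration: average pairwise Jaccard overlap between
# analysts' reported problem sets as a rough across-analyst consistency score.
from itertools import combinations


def any_two_agreement(problem_sets):
    """Average Jaccard overlap (|A ∩ B| / |A ∪ B|) over all analyst pairs."""
    pairs = list(combinations(problem_sets, 2))
    if not pairs:
        return 1.0  # a single analyst is trivially consistent with themselves
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)


# Made-up problem IDs reported by three analysts for the same test session
analysts = [
    {"P1", "P2", "P3"},
    {"P2", "P3", "P4"},
    {"P1", "P2"},
]
print(any_two_agreement(analysts))  # well below 1.0: analysts disagree
```

A score of 1.0 would mean every analyst reported exactly the same problems; values well below 1.0 reflect the kind of across-analyst inconsistency the thesis investigates.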