From a User Study to a Valid Claim

How to Test Your Hypothesis and Avoid Common Pitfalls

Conference Paper (2017)
Author(s)

Niels de Hoon (TU Delft - Computer Graphics and Visualisation)

E. Eisemann (TU Delft - Computer Graphics and Visualisation)

Anna Vilanova Bartroli (TU Delft - Computer Graphics and Visualisation)

Research Group
Computer Graphics and Visualisation
DOI related publication
https://doi.org/10.2312/eurorv3.20171110
Publication Year
2017
Language
English
Pages (from-to)
25-28
ISBN (electronic)
978-3-03868-041-3

Abstract

The evaluation of visualization methods or designs often relies on user studies. Apart from the difficulties involved in designing the study itself, the mechanisms available for drawing sound conclusions are often unclear. In this work, we review and summarize common statistical techniques that can be used to validate a claim in the scenarios that typically arise in user studies in visualization, i.e., hypothesis testing. Usually, the number of participants is small and the mean and variance of the distribution are unknown; we therefore focus on techniques that remain adequate under these limitations. Our aim in this paper is to clarify the goals and limitations of hypothesis testing from a user-study perspective, which can be of interest to the visualization community. We provide an overview of the most common mistakes made when testing a hypothesis that can lead to erroneous claims, and we present strategies to avoid them.
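
The paper itself is not included in this record, but as a rough illustration of the scenario the abstract describes (few participants, unknown mean and variance), the sketch below runs a paired t-test and its non-parametric counterpart with SciPy. The data values, the within-subjects design, and the significance level are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch only: hypothetical completion times (seconds) from a
# small within-subjects user study comparing two visualization designs.
# The paper's actual data and analysis are not reproduced here.
from scipy import stats

design_a = [12.3, 15.1, 11.8, 14.0, 13.5, 16.2, 12.9, 14.7]  # assumed values
design_b = [10.9, 13.6, 11.2, 12.8, 12.1, 14.9, 11.5, 13.3]  # assumed values

# Paired t-test: suitable when the sample is small, the population variance is
# unknown, and the paired differences are approximately normally distributed.
t_stat, p_t = stats.ttest_rel(design_a, design_b)

# Wilcoxon signed-rank test: a non-parametric alternative when normality of
# the differences cannot be assumed.
w_stat, p_w = stats.wilcoxon(design_a, design_b)

alpha = 0.05  # conventional significance level, chosen here for illustration
print(f"paired t-test: t = {t_stat:.2f}, p = {p_t:.3f}")
print(f"Wilcoxon test: W = {w_stat:.1f}, p = {p_w:.3f}")
print("reject H0 (no difference)" if p_t < alpha else "fail to reject H0")
```

In a sketch like this, the choice between the t-test and the Wilcoxon test hinges on whether the distributional assumptions hold for the small sample at hand, which is exactly the kind of decision the paper's overview addresses.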

Metadata-only record; no files are available for this record.