Improving Search Relevance Feedback through Human Centered Design

Abstract


Artificial intelligence (AI) is expected to play a transformational role in health and wellbeing. Search (i.e., information retrieval) technologies already play a significant role in healthcare research and practice. Relevance feedback in search is vital for system evaluation and improvement. However, in small-user-scale contexts, implicit user behaviors may not yield valid relevance judgments. Therefore, engaging users to provide such feedback explicitly is essential for improving search performance (i.e., effectiveness). Previous research has found, however, that users are generally reluctant to provide explicit feedback in digital environments, and that this willingness decreases over time in some experiments. In collaboration with myTomorrows, an Amsterdam-based pharma-tech company, this Master's thesis addresses this challenge in the specific context of the myTomorrows AI-powered treatment search, which urgently needs to engage healthcare professionals (HCPs) in providing relevance feedback on search results (e.g., Clinical Trials and Expanded Access Programs) for system evaluation and improvement. Through Human-Centered Design methods such as interviews, observations, and speed dates, the project yielded a future myTomorrows Search design enhanced with three relevance feedback collection concepts. As research materials, the concepts were tested and evaluated by nine HCPs from three countries (the Netherlands, China, and Brazil). The user study results indicate that embedding utility as the motivator in relevance feedback collection appeals to HCPs more than motivators such as altruism or enjoyment. Moreover, the best point of user engagement is identified as the moment between a user finishing the examination of one piece of information and starting the next. Additionally, this study generalizes the project process and user study insights into a four-stage guide for designing explicit feedback collection in text-based search. Although it remains unvalidated, this guide has the potential to apply to other small-user-scale contexts, guiding or inspiring user researchers and designers in designing explicit user feedback collection in search.