Interpretability in Neural Information Retrieval

Doctoral Thesis (2025)
Author(s)

L. Lyu (TU Delft - Web Information Systems)

Contributor(s)

G. J. Houben – Promotor (TU Delft - Web Information Systems)

Avishek Anand – Promotor (TU Delft - Web Information Systems)

Research Group
Web Information Systems
Publication Year
2025
Language
English
ISBN (electronic)
978-94-6518-007-6
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Neural information retrieval (IR) has transitioned from classical, human-defined relevance rules to complex neural models for retrieval tasks. While benefiting from advances in machine learning (ML), neural IR also inherits several drawbacks, including the opacity of the models' decision-making processes. This thesis aims to tackle this issue and enhance the transparency of neural IR models. In particular, our work focuses on understanding which input features neural ranking models rely on to produce a specific ranking list. Our work draws inspiration from interpretable ML, but we also recognize the unique aspects of IR tasks, which guide our development of methods specifically designed to interpret IR models....
