Searched for: author:"Anand, A."
(1 - 13 of 13)
Wallat, Jonas (author), Jatowt, Adam (author), Anand, A. (author)
Large language models (LLMs) have recently gained significant attention due to their unparalleled zero-shot performance on various natural language processing tasks. However, the pre-training data utilized in LLMs is often confined to a specific corpus, resulting in inherent freshness and temporal scope limitations. Consequently, this raises...
conference paper 2024
Funke, Thorben (author), Khosla, M. (author), Rathee, Mandeep (author), Anand, A. (author)
With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related...
journal article 2023
Rudra, Koustav (author), Fernando, Zeon Trevor (author), Anand, A. (author)
Pre-trained contextual language models such as BERT, GPT, and XLNet work quite well for document retrieval tasks. Such models are fine-tuned based on the query-document/query-passage level relevance labels to capture the ranking signals. However, the documents are longer than the passages and such document ranking models suffer from the token...
journal article 2023
Anand, A. (author), Sen, Procheta (author), Saha, Sourav (author), Verma, Manisha (author), Mitra, Mandar (author)
This tutorial presents explainable information retrieval (ExIR), an emerging area focused on fostering responsible and trustworthy deployment of machine learning systems in the context of information retrieval. As the field has rapidly evolved in the past 4-5 years, numerous approaches have been proposed that focus on different access modes,...
conference paper 2023
Lyu, L. (author), Anand, A. (author)
This paper proposes a novel approach towards better interpretability of a trained text-based ranking model in a post-hoc manner. A popular approach to post-hoc interpretability of text ranking models is based on locally approximating the model behavior using a simple ranker. Since rankings have multiple relevance factors and are aggregations...
conference paper 2023
Abolfazli, Amir (author), Spiegelberg, Jakob (author), Anand, A. (author), Palmer, Gregory (author)
Configurable software systems have become increasingly popular as they enable customized software variants. The main challenge in dealing with configuration problems is that the number of possible configurations grows exponentially as the number of features increases. Therefore, algorithms for testing customized software have to deal with the...
conference paper 2023
Leonhardt, L.J.L. (author), Rudra, Koustav (author), Anand, A. (author)
Neural document ranking models perform impressively well due to superior language understanding gained from pre-training tasks. However, due to their complexity and large number of parameters, these (typically transformer-based) models are often non-interpretable in that ranking decisions cannot be clearly attributed to specific parts of the...
journal article 2023
Wallat, Jonas (author), Beringer, Fabian (author), Anand, Abhijit (author), Anand, A. (author)
Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT...
conference paper 2023
Zhu, P. (author), Wang, Z. (author), Yang, J. (author), Hauff, C. (author), Anand, A. (author)
Quality control is essential for creating extractive question answering (EQA) datasets via crowdsourcing. Aggregation across answers, i.e., word spans within passages annotated by different crowd workers, is one major focus for ensuring dataset quality. However, crowd workers cannot reach a consensus on a considerable portion of questions. We...
conference paper 2022
Anand, Abhijit (author), Leonhardt, Jurek (author), Rudra, Koustav (author), Anand, A. (author)
Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. This paper proposes a simple yet effective method to improve ranking performance on...
conference paper 2022
Wang, Yumeng (author), Lyu, Lijun (author), Anand, A. (author)
Contextual ranking models based on BERT are now well established for a wide range of passage and document ranking tasks. However, the robustness of BERT-based ranking models under adversarial inputs is under-explored. In this paper, we argue that BERT-rankers are not immune to adversarial attacks targeting retrieved documents given a query....
conference paper 2022
Erlei, Alexander (author), Das, Richeek (author), Meub, Lukas (author), Anand, A. (author), Gadiraju, Ujwal (author)
As algorithms are increasingly augmenting and substituting human decision-making, understanding how the introduction of computational agents changes the fundamentals of human behavior becomes vital. This pertains to not only users, but also those parties who face the consequences of an algorithmic decision. In a controlled experiment with 480...
conference paper 2022
Zhang, Zijian (author), Setty, Vinay (author), Anand, A. (author)
We introduce SparCAssist, a general-purpose risk assessment tool for machine learning models trained for language tasks. It evaluates models' risk by inspecting their behavior on counterfactuals, namely out-of-distribution instances generated based on the given data instance. The counterfactuals are generated by replacing tokens in...
conference paper 2022