Ranking Fusion Functions in Neural Ranking Models

The Impact of Ranking Fusion Function on Neural Ranking Models with Fast Forward Indexes


Abstract

This research explores the impact of rank fusion functions within the retrieve-and-rerank framework with Fast-Forward Indexes. Using the sparse BM25 model for retrieval and the dense TCT-ColBERT model for semantic score computation, various rank fusion functions are evaluated at the interpolation stage, and the relation of the interpolated rank to the semantic and lexical ranks is examined. The parametric approach makes it easy to adjust the influence of the sparse and dense models on the final rank; the non-parametric approach, by contrast, lacks this flexibility and by default assigns equal weight to the sparse and dense scores. Ranking effectiveness and latency are also measured to further evaluate each function. Owing to the flexibility of parametric functions, the convex rank fusion function and its normalized variants yield the best trade-off between latency and ranking effectiveness, followed by reciprocal rank fusion. In contrast, the non-parametric functions, namely Inverse Square Rank Reciprocal, CombMNZ, and Condorcet Fuse, generally perform worse.
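
To make the contrast between the two fusion families concrete, the Python sketch below illustrates a parametric convex combination of sparse and dense scores next to non-parametric reciprocal rank fusion. It is a minimal sketch, not the paper's implementation: the example documents and scores, the alpha weight of 0.3, and the function names are illustrative assumptions, and k = 60 is only the constant commonly used for reciprocal rank fusion.

# Sketch of the two fusion families discussed in the abstract.
# All concrete values here are illustrative assumptions.

def convex_fusion(lexical, semantic, alpha=0.5):
    """Parametric fusion: alpha * lexical + (1 - alpha) * semantic.

    alpha tunes the influence of the sparse (lexical) model; because
    BM25 and dense scores live on different scales, the normalized
    variants mentioned in the abstract would rescale each score list
    (e.g. min-max) before taking this weighted sum.
    """
    docs = lexical.keys() & semantic.keys()
    return {d: alpha * lexical[d] + (1 - alpha) * semantic[d] for d in docs}

def reciprocal_rank_fusion(rankings, k=60):
    """Non-parametric fusion: each ranking contributes 1 / (k + rank)
    per document (1-based ranks), with contributions summed."""
    fused = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            fused[doc] = fused.get(doc, 0.0) + 1.0 / (k + rank)
    return fused

if __name__ == "__main__":
    # Hypothetical BM25 (sparse) and TCT-ColBERT (dense) scores.
    bm25 = {"d1": 12.3, "d2": 9.8, "d3": 7.1}
    dense = {"d1": 0.62, "d2": 0.71, "d3": 0.55}

    print(convex_fusion(bm25, dense, alpha=0.3))

    # RRF uses only the rank order from each retriever, not raw scores.
    sparse_ranking = sorted(bm25, key=bm25.get, reverse=True)
    dense_ranking = sorted(dense, key=dense.get, reverse=True)
    print(reciprocal_rank_fusion([sparse_ranking, dense_ranking]))

The sketch also makes the abstract's flexibility argument visible: convex fusion exposes alpha as a tunable knob over the sparse/dense balance, whereas reciprocal rank fusion weights every input ranking equally by construction.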