Does LRU always outperform FIFO?
A mathematical and simulation-based approach to identify better performing caching algorithms
V.N.W. Terlouw (TU Delft - Electrical Engineering, Mathematics and Computer Science)
C.E. Groenland – Mentor (TU Delft - Discrete Mathematics and Optimization)
T.W.C. Vroegrijk – Graduation committee member (TU Delft - Analysis)
Abstract
Caching plays a crucial role in keeping the internet accessible. Many algorithms exist to determine which page should be evicted from the cache, but most analyses of these algorithms only focus on the competitive ratio. The competitive ratio, however, cannot identify which algorithm performs well in practice and which only performs well on paper.
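To make the eviction policies under comparison concrete, the following is a minimal Python sketch of LRU and FIFO caches that count misses on a request sequence. This is illustrative only and not the thesis's simulation code; the function names and interfaces are assumptions.

```python
from collections import OrderedDict

def count_misses_lru(requests, cache_size):
    """Count cache misses for LRU: evict the least recently used page."""
    cache = OrderedDict()
    misses = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)          # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)    # evict least recently used
            cache[page] = True
    return misses

def count_misses_fifo(requests, cache_size):
    """Count cache misses for FIFO: evict the page that entered the cache first."""
    cache = []
    misses = 0
    for page in requests:
        if page not in cache:
            misses += 1
            if len(cache) >= cache_size:
                cache.pop(0)                 # evict the oldest entry; hits do not refresh
            cache.append(page)
    return misses
```

The sequence 1, 2, 1, 3, 1, 4 with cache size 2 already separates the two policies: LRU keeps the frequently re-requested page 1 resident, while FIFO evicts it on age alone.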
The goal of this research is to study how the performance of deterministic and randomized caching algorithms can be compared, and how the algorithms behave on differently structured request sequences. A purely mathematical analysis is combined with simulations of uniformly random, recency-based, and favorite element request sequences.
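The three request-sequence structures can be sketched as simple generators. The exact distributions and parameters used in the thesis are not given in the abstract; the probabilities and window size below are illustrative assumptions.

```python
import random

def uniform_requests(n, universe):
    """Uniformly random: each request is drawn uniformly from the page universe."""
    return [random.randrange(universe) for _ in range(n)]

def favorite_element_requests(n, universe, favorite=0, p=0.5):
    """Favorite element: with probability p request the favorite page,
    otherwise a uniformly random page (p is an assumed parameter)."""
    return [favorite if random.random() < p else random.randrange(universe)
            for _ in range(n)]

def recency_requests(n, universe, locality=0.7, window=5):
    """Recency-based: with probability `locality` re-request one of the
    last `window` pages (both parameters are assumed here)."""
    seq = []
    for _ in range(n):
        if seq and random.random() < locality:
            page = random.choice(seq[-window:])   # favor recently seen pages
        else:
            page = random.randrange(universe)
        seq.append(page)
    return seq
```

Sequences like these let the miss counts of LRU, FIFO, and LFU be compared against each other and against the optimal offline policy under controlled request structure.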
Sleator and Tarjan [6] state that both the FIFO and LRU algorithms are competitive, but provide a proof only for LRU. This research supplies the corresponding proof of competitiveness for the FIFO algorithm.
In addition, Panagiotou and Souza [4] use the characteristic vector to determine the number of cache misses for LRU. This research shows that for the FIFO algorithm, both an upper and lower bound can be obtained by using the characteristic vector.
Finally, simulations show that different request structures lead to different well-performing algorithms. The LFU algorithm, although not competitive, performs best on the favorite element request sequences, while LRU performs best on random and recency-based request sequences.
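For completeness, the LFU policy mentioned above can be sketched in the same style as the LRU and FIFO counters: it evicts the cached page with the lowest request frequency seen so far. This is an illustrative sketch, not the thesis's implementation; ties are broken arbitrarily here.

```python
from collections import Counter

def count_misses_lfu(requests, cache_size):
    """Count cache misses for LFU: evict the cached page with the
    lowest observed request frequency (ties broken arbitrarily)."""
    cache, freq, misses = set(), Counter(), 0
    for page in requests:
        freq[page] += 1                      # track lifetime request frequency
        if page not in cache:
            misses += 1
            if len(cache) >= cache_size:
                victim = min(cache, key=lambda p: freq[p])  # least frequently used
                cache.remove(victim)
            cache.add(page)
    return misses
```

On a favorite-element sequence such as 0, 1, 0, 2, 0, 3, 0, 4 the frequency counter keeps the favorite page 0 resident, which is consistent with the abstract's observation that LFU performs best on that request structure.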
More broadly, the results show that a larger cache size yields a higher ratio between the number of cache misses of the algorithms and that of the optimal solution, while a larger universe of pages leads to a lower ratio.