Enabling FPGA Memory Management for Big Data Applications Using Fletcher


Abstract

Availability of FPGAs is increasing due to cloud service offerings. In the wake of a new in-memory storage format specification, Apache Arrow, FPGAs are increasingly interesting for accelerating data processing in the big data domain. The Fletcher framework makes it easy to develop FPGA-accelerated applications that access data stored in the Apache Arrow format, while providing throughput near the system's limit. The current implementation of Fletcher has limited support for managing FPGA-local memory; one of its biggest limitations is that memory can only be used once.
This thesis explores several memory management techniques suitable for use on FPGAs in a big data context. Paged memory is implemented on the FPGA within the Fletcher framework to facilitate this memory management. The implemented system takes less than 5 % of the resources of a data centre FPGA card (Xilinx UltraScale+ VU9P). Experiments show that paged memory sustains over 99.7 % of the system's throughput for linear memory accesses. For random memory accesses, throughput drops to between 30 % and 90 % of the system's original throughput, depending on request size. This performance drop can be mitigated or even eliminated by employing suitable address-translation caches.
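As a rough software analogue of the paged memory and address-translation caching described above, the following sketch maps virtual page numbers to physical pages through a page table and fronts it with a small translation cache. The page size, table layout, and cache capacity are illustrative assumptions, not details of the Fletcher implementation.

```python
PAGE_SIZE = 4096  # assumed page size in bytes; a power of two keeps splitting cheap

class PagedMemory:
    """Toy model of paged address translation with a small translation cache."""

    def __init__(self, tlb_capacity=8):
        self.page_table = {}       # virtual page number -> physical page number
        self.next_phys_page = 0    # next free physical page (bump allocator)
        self.tlb = {}              # tiny cache of recent translations
        self.tlb_capacity = tlb_capacity

    def map_page(self, vpn):
        """Back a virtual page number with the next free physical page."""
        self.page_table[vpn] = self.next_phys_page
        self.next_phys_page += 1

    def translate(self, vaddr):
        """Translate a virtual byte address to a physical byte address."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in self.tlb:                      # cache hit: no table walk needed
            return self.tlb[vpn] * PAGE_SIZE + offset
        ppn = self.page_table[vpn]               # cache miss: walk the page table
        if len(self.tlb) >= self.tlb_capacity:   # evict an arbitrary entry when full
            self.tlb.pop(next(iter(self.tlb)))
        self.tlb[vpn] = ppn
        return ppn * PAGE_SIZE + offset
```

In this model, linear accesses incur at most one table walk per page and otherwise hit the cache, while random accesses spread over many pages miss frequently, mirroring the throughput gap between linear and random accesses reported in the abstract.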

Files

Report.pdf
(.pdf | 0.731 MB)