Meeting Latency Target in Transient Burst

A Case on Spark Streaming

Conference Paper (2017)
Author(s)

Robert Birke (Zurich Lab)

Mathias Bjoerkqvist (Zurich Lab)

Evangelia Kalyvianaki (City University London)

Y. Chen (Zurich Lab)

Affiliation
External organisation
DOI
https://doi.org/10.1109/IC2E.2017.17
Publication Year
2017
Language
English
Pages (from-to)
149-158
ISBN (electronic)
9781509058174

Abstract

Real-time processing of big data has become a core operation in many areas of business, such as extracting value from real-time social network data. Big data workloads in the wild show strong temporal variability, which not only poses the risk of slow responsiveness in data analysis but also carries a high risk of service outage. Recently developed batch streaming systems based on the MapReduce framework have been shown to be effective on non-overloaded systems. However, little is known about how to enhance the performance of batch streaming systems under bursty workloads. In this paper, we propose a latency-driven data controller, Dslash, which aims to process as much data as possible while processing it as fast as the application's target latency and the system capacity allow. In particular, we implement Dslash on Spark Streaming, an emerging and complex batch streaming system. Dslash features include (i) placing data in an augmented distributed memory, (ii) shedding out-of-date data, (iii) improving the processing locality of Map tasks, and (iv) delaying data processing in transient overloads. Extensive evaluations on a large number of workloads show that Dslash ensures stable and fast responsiveness compared to vanilla Spark Streaming.
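The interplay of features (ii) and (iv) above can be sketched as a simple per-batch decision rule: drop records that have already exceeded the latency target, and defer any load beyond the estimated capacity to the next interval. The following is a minimal, hypothetical sketch of that idea; all names (`BatchController`, `capacity_rps`, etc.) are illustrative assumptions, not the paper's actual implementation, which is built inside Spark Streaming.

```python
from collections import deque

class BatchController:
    """Per-micro-batch controller: sheds stale records and defers excess
    load so that batch latency stays within the target (illustrative only)."""

    def __init__(self, target_latency_s, capacity_rps):
        self.target_latency_s = target_latency_s  # application latency target (seconds)
        self.capacity_rps = capacity_rps          # estimated sustainable records/second
        self.queue = deque()                      # (arrival_time, record) pairs held in memory

    def enqueue(self, arrival_time, record):
        self.queue.append((arrival_time, record))

    def plan_batch(self, now):
        # (ii) shed out-of-date data: records older than the target are
        # no longer useful to the application, so drop them outright.
        while self.queue and now - self.queue[0][0] > self.target_latency_s:
            self.queue.popleft()
        # (iv) delay processing in transient overload: admit only as many
        # records as the capacity can process within the latency budget;
        # the remainder stays queued for the next batch interval.
        budget = int(self.capacity_rps * self.target_latency_s)
        batch = [self.queue.popleft()[1]
                 for _ in range(min(len(self.queue), budget))]
        return batch
```

With a 1-second target and a capacity estimate of 10 records/second, a burst of 25 fresh records yields a batch of 10, with 15 deferred, while any records already older than one second are shed rather than processed late.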

Metadata only record. There are no files for this record.