D.H.J. Epema
172 records found
...
Web3 is emerging as the new Internet-interaction model that facilitates direct collaboration between strangers without a need for prior trust between network participants and without central authorities. However, one of its shortcomings is the lack of a defense mechanism against
...
The solar industry in residential areas has been witnessing astonishing growth worldwide. At the heart of this transformation, affecting the edge of the electricity grid, reside smart inverters (SIs). These IoT-enabled devices aim to introduce a certain degree of intelligence
...
The concept of the Internet of Energy (IoE) emerged as an innovative paradigm to encompass all the complex and intertwined notions relevant to the transition of current smart grids towards more decarbonization, digitalization, and decentralization. With a focus on the last two aspects
...
Peer-to-Peer (P2P) energy trading, which allows energy consumers/producers to directly trade with each other, is one of the new paradigms driven by the decarbonization, decentralization, and digitalization of the energy supply chain. Additionally, the rise of blockchain technology
...
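To make the direct-trading model above concrete, here is a minimal sketch of P2P energy order matching as a greedy double auction; the participants, prices, and the surplus-splitting rule are illustrative assumptions, not the mechanism studied in the paper.

# Minimal sketch of P2P energy order matching (illustrative only).
# Bids/offers are (price_per_kWh, kWh) tuples from hypothetical participants.

def match_orders(buy_bids, sell_offers):
    """Greedy double auction: match highest bid with lowest offer while bid >= offer."""
    buys = sorted(buy_bids, key=lambda b: -b[0])     # highest price first
    sells = sorted(sell_offers, key=lambda s: s[0])  # lowest price first
    trades = []
    i = j = 0
    while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
        qty = min(buys[i][1], sells[j][1])
        price = (buys[i][0] + sells[j][0]) / 2       # split the surplus evenly
        trades.append((price, qty))
        buys[i] = (buys[i][0], buys[i][1] - qty)
        sells[j] = (sells[j][0], sells[j][1] - qty)
        if buys[i][1] == 0:
            i += 1
        if sells[j][1] == 0:
            j += 1
    return trades

print(match_orders([(0.30, 5), (0.22, 3)], [(0.20, 4), (0.25, 6)]))
# -> [(0.25, 4), (0.275, 1)]: two trades clear, the lowest bid stays unmatched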
Lightning, the prevailing solution to Bitcoin's scalability issue, uses onion routing to hide senders and recipients of payments. Yet, the path between the sender and the recipient along which payments are routed is selected such that it is short, cost efficient, and fast. The lo
...
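As an illustration of the path-selection trade-off described above (short, cheap, fast), here is a minimal Dijkstra-based sketch over a hypothetical channel graph; the linear fee/delay cost function and all channel parameters are assumptions, not the policy of any actual Lightning client.

import heapq

# Hypothetical channel graph: node -> list of (neighbor, fee_msat, delay_blocks).
CHANNELS = {
    "A": [("B", 10, 40), ("C", 25, 20)],
    "B": [("D", 10, 40)],
    "C": [("D", 5, 144)],
    "D": [],
}

def cheapest_path(src, dst, fee_weight=1.0, delay_weight=0.1):
    """Dijkstra over a linear fee/delay cost (illustrative, not any client's real policy)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        cost, node = heapq.heappop(pq)
        if node == dst:
            break
        if cost > dist.get(node, float("inf")):
            continue
        for nbr, fee, delay in CHANNELS[node]:
            c = cost + fee_weight * fee + delay_weight * delay
            if c < dist.get(nbr, float("inf")):
                dist[nbr] = c
                prev[nbr] = node
                heapq.heappush(pq, (c, nbr))
    path, node = [], dst
    while node != src:           # walk predecessors back to the sender
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

print(cheapest_path("A", "D"))  # ['A', 'B', 'D']: favors the low-fee, low-delay route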
Existing digital identity management systems fail to deliver the desirable properties of control by the users of their own identity data, credibility of disclosed identity data, and network-level anonymity. The recently proposed Self-Sovereign Identity (SSI) approach promises to
...
Large data centers are currently the mainstream infrastructures for big data processing. As one of the most fundamental tasks in these environments, the efficient execution of distributed data operators (e.g., join and aggregation) is still challenging current data systems, and
...
The demand for additional performance due to the rapid increase in the size and importance of data-intensive applications has considerably elevated the complexity of computer architecture. In response, systems offer pre-determined behaviors based on heuristics and then expose a l
...
Cloud schedulers that allocate resources exclusively to single workflows are not work-conserving as they may be forced to leave gaps in their schedules because of the precedence constraints in the workflows. Thus, they may lead to a waste of financial resources. This problem can
...
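A minimal sketch of the idea: precedence constraints force an idle gap on one processor, and a work-conserving scheduler can backfill that gap with a task from another workflow. Task names, runtimes, and the gap are made-up illustrations.

# Workflow W1: t2 depends on t1. Two processors, P0 and P1.
# P0: t1 runs [0, 4); then t2 runs [4, 8) (must wait for t1).
# P1: idle during [0, 4) if it may only serve W1 -> wasted (and billed) time.

gap = (0, 4)                      # idle interval on P1
independent_task = ("u1", 3)      # task from another workflow, runtime 3

def backfill(gap, task):
    """Place the task in the gap if it fits entirely; otherwise leave the gap."""
    start, end = gap
    name, runtime = task
    if runtime <= end - start:
        return (name, start, start + runtime)
    return None

print(backfill(gap, independent_task))  # ('u1', 0, 3): the gap is partly reclaimed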
Elasticity is one of the main features of cloud computing allowing customers to scale their resources based on the workload. Many autoscalers have been proposed in the past decade to decide on behalf of cloud customers when and how to provision resources to a cloud application based
...
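For illustration, a minimal reactive threshold autoscaler in the spirit of the policies discussed above; the thresholds, step size, and server bounds are hypothetical parameters, not those of any evaluated autoscaler.

def autoscale(utilization, servers, up=0.8, down=0.3, min_s=1, max_s=20):
    """Return the new server count for the observed average utilization."""
    if utilization > up and servers < max_s:
        return servers + 1          # scale out under high load
    if utilization < down and servers > min_s:
        return servers - 1          # scale in when mostly idle
    return servers                  # otherwise hold steady

servers = 4
for u in [0.85, 0.90, 0.50, 0.20, 0.10]:
    servers = autoscale(u, servers)
    print(f"utilization={u:.2f} -> servers={servers}")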
Workflow schedulers often rely on task runtime estimates when making scheduling decisions, and they usually target the scheduling of a single workflow or batches of workflows. In contrast, in this paper, we evaluate the impact of the absence or limited accuracy of task runtime estimates
...
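A small sketch of how limited estimate accuracy can be modeled: true runtimes are perturbed with multiplicative noise, and a greedy longest-estimated-first schedule is compared against one built from perfect information. The error model, runtimes, and machine count are illustrative assumptions.

import random

random.seed(0)
true_runtimes = [8, 5, 4, 3, 2, 2]

def estimates(truth, error=0.5):
    """Each estimate is off by up to +/- error (uniform multiplicative noise)."""
    return [t * random.uniform(1 - error, 1 + error) for t in truth]

def makespan(truth, order, machines=2):
    """Greedy list scheduling: place each task on the least-loaded machine."""
    loads = [0.0] * machines
    for i in order:
        loads[loads.index(min(loads))] += truth[i]
    return max(loads)

est = estimates(true_runtimes)
order_by_estimate = sorted(range(len(est)), key=lambda i: -est[i])
order_by_truth = sorted(range(len(true_runtimes)), key=lambda i: -true_runtimes[i])
print("makespan with noisy estimates:", makespan(true_runtimes, order_by_estimate))
print("makespan with perfect info:   ", makespan(true_runtimes, order_by_truth))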
When multiple data-processing frameworks with time-varying workloads are simultaneously present in a single cluster or data-center, an apparent goal is to have them experience equal performance, expressed in whatever performance metrics are applicable. In modern data-center environments
...
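One way to make "equal performance" concrete is to track a metric per framework and shift capacity toward the worse-off one; the sketch below uses job slowdown and a one-node adjustment step, both illustrative assumptions rather than the paper's actual policy.

def rebalance(nodes, slowdowns):
    """Move one node from the best-off framework to the worst-off one."""
    worst = max(range(len(slowdowns)), key=lambda i: slowdowns[i])
    best = min(range(len(slowdowns)), key=lambda i: slowdowns[i])
    if worst != best and nodes[best] > 1:
        nodes[best] -= 1
        nodes[worst] += 1
    return nodes

nodes = [50, 50]                 # cluster nodes allocated to frameworks F0 and F1
slowdowns = [1.2, 3.8]           # observed job slowdown: F1 performs much worse
print(rebalance(nodes, slowdowns))  # [49, 51]: capacity shifts toward F1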
Better Safe than Sorry
Grappling with Failures of In-Memory Data Analytics Frameworks
Providing fault-tolerance is of major importance for data analytics frameworks such as Hadoop and Spark, which are typically deployed in large clusters that are known to experience high failure rates. Unexpected events such as compute node failures are, in particular, an important
...
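A minimal sketch of the recovery cost at stake: failed tasks are re-executed, inflating the total work of a job (a simplified stand-in for how frameworks such as Spark recompute lost work; the failure probability and task runtimes are made up).

import random

random.seed(1)

def run_job(num_tasks=100, task_time=1.0, p_fail=0.05, max_retries=3):
    """Total work done, counting re-executions of failed task attempts."""
    total = 0.0
    for _ in range(num_tasks):
        for attempt in range(max_retries + 1):
            total += task_time
            if random.random() > p_fail:   # this attempt survived
                break
    return total

print(f"work with failures: {run_job():.0f} task-seconds (vs 100 without)")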
Efficient execution of distributed database operators such as joining and aggregating is critical for the performance of big data analytics. With the increasing compute speed of modern CPUs, reducing the network communication time of these operators in large systems is ...
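To illustrate why network communication dominates such operators, here is a minimal hash-partitioned distributed join: every tuple is shuffled to the worker owning its key's hash bucket before a local hash join runs. The tables and worker count are toy assumptions.

NUM_WORKERS = 4

def partition(table):
    """Assign each (key, value) tuple to a worker by hashing the key (the shuffle)."""
    parts = [[] for _ in range(NUM_WORKERS)]
    for key, value in table:
        parts[hash(key) % NUM_WORKERS].append((key, value))
    return parts

def local_join(r_part, s_part):
    """Classic hash join within one worker's partition."""
    index = {}
    for key, value in r_part:
        index.setdefault(key, []).append(value)
    return [(k, rv, sv) for k, sv in s_part for rv in index.get(k, [])]

R = [(1, "a"), (2, "b"), (3, "c")]
S = [(1, "x"), (3, "y"), (4, "z")]
results = []
for r_part, s_part in zip(partition(R), partition(S)):
    results.extend(local_join(r_part, s_part))
print(sorted(results))  # [(1, 'a', 'x'), (3, 'c', 'y')]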
In recent years, many distributed graph-processing systems have been designed and developed to analyze large-scale graphs. For all distributed graph-processing systems, partitioning graphs is a key part of processing and important for achieving good processing performance.
...
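As a minimal illustration of why partitioning matters, the sketch below hash-partitions a toy graph's vertices and measures the edge cut, a common proxy for the communication cost of distributed graph processing; the graph and the trivial partitioner are illustrative assumptions.

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (4, 5), (5, 0)]
NUM_PARTS = 2

def owner(vertex):
    """Trivial hash partitioner: map each vertex to a partition."""
    return hash(vertex) % NUM_PARTS

# Every cut edge implies communication between two workers during processing.
cut = sum(1 for u, v in EDGES if owner(u) != owner(v))
print(f"edge cut: {cut} of {len(EDGES)} edges cross partitions")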
Simplifying the task of resource management and scheduling for customers, while still delivering complex Quality-of-Service (QoS), is key to cloud computing. Many autoscaling policies have been proposed in the past decade to decide on behalf of cloud customers when and how to provision
...
When Game Becomes Life
The Creators and Spectators of Online Game Replays and Live Streaming
Online gaming franchises such as World of Tanks, Defense of the Ancients, and StarCraft have attracted hundreds of millions of users who, apart from playing the game, also socialize with each other through gaming and viewing gamecasts. As a form of User Generated Content (UGC), gamecasts
...
Tyrex
Size-Based Resource Allocation in MapReduce Frameworks
Many large-scale data analytics infrastructures are employed for a wide variety of jobs, ranging from short interactive queries to large data analysis jobs that may take hours or even days to complete. As a consequence, data-processing frameworks like MapReduce may have workloads
...
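A minimal sketch of size-based scheduling without a priori size information: each job starts in a queue for short jobs and is demoted once its attained service crosses a threshold (a multi-level-feedback-style policy; the thresholds and queue names are hypothetical, not necessarily what Tyrex does).

THRESHOLDS = [60, 3600]          # seconds of service before demotion to the next queue

def queue_for(attained_service):
    """Pick the queue level based on how much service a job has received so far."""
    for level, limit in enumerate(THRESHOLDS):
        if attained_service < limit:
            return f"queue-{level}"          # still within this level's budget
    return f"queue-{len(THRESHOLDS)}"        # the longest jobs end up here

for served in [10, 600, 90000]:
    print(f"attained service {served:6d}s -> {queue_for(served)}")
# Short interactive jobs stay in queue-0 and never wait behind day-long jobs.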
Graph processing is increasingly used in a variety of domains, from engineering to logistics and from scientific computing to online gaming. To process graphs efficiently, GPU-enabled graph-processing systems such as TOTEM and Medusa exploit the GPU or the combined CPU+GPU capabilities
...
The Dutch Advanced School for Computing and Imaging has built five generations of a 200-node distributed system over nearly two decades while remaining aligned with the shifting computer science research agenda. The system has supported years of award-winning research, underlining
...