A Cost-based Storage Format Selector for Materialization in Big Data Frameworks
Modern big data frameworks (such as Hadoop and Spark) allow multiple users to
perform large-scale analyses simultaneously. Typically, users deploy
Data-Intensive Workflows (DIWs) for their analytical tasks. The DIWs of
different users share many common parts (typically 50-80%), which can be
materialized and reused in future executions. Materialization improves the
overall processing time of DIWs and saves computational resources. Current
solutions for materialization store data on Distributed File Systems (DFS)
using a fixed data format. However, a fixed choice is not necessarily optimal
in every situation: it is well known, for example, that different data
fragmentation strategies (i.e., horizontal, vertical, or hybrid) perform
better or worse depending on the access patterns of the subsequent operations.
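To make the fragmentation trade-off concrete, the following is a minimal sketch (not from the paper; all function names, sizes, and numbers are illustrative assumptions) of how the bytes read by a projection query differ between horizontal (row-wise) and vertical (column-wise) fragmentation:

```python
def bytes_read_horizontal(n_rows, row_size):
    # Horizontal fragments store whole rows: every full row must be read,
    # even when the query needs only a few columns.
    return n_rows * row_size

def bytes_read_vertical(n_rows, col_sizes, projected_cols):
    # Vertical fragments store columns separately: only the projected
    # columns are read from storage.
    return n_rows * sum(col_sizes[c] for c in projected_cols)

# Illustrative schema: bytes per value for each column (assumed numbers).
col_sizes = {"id": 8, "name": 32, "payload": 200}
n_rows = 1_000_000
row_size = sum(col_sizes.values())  # 240 bytes per row

# A query projecting only "id" and "name" favours vertical fragmentation:
h = bytes_read_horizontal(n_rows, row_size)                  # 240,000,000 bytes
v = bytes_read_vertical(n_rows, col_sizes, ["id", "name"])   # 40,000,000 bytes
print(h, v)
```

Under a full-table scan the comparison flips toward the horizontal layout, which is exactly why the choice should depend on the workload's access pattern.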
In this paper, we present a cost-based approach that helps decide the most
appropriate storage format in each situation. We first present a generic
cost-based storage format selector framework that considers the three
fragmentation strategies. We then use this framework to instantiate cost
models for specific Hadoop data formats (namely SequenceFile, Avro, and
Parquet) and evaluate it with realistic use cases. Our solution gives on
average a 33% speedup over SequenceFile, an 11% speedup over Avro, and a 32%
speedup over Parquet; overall, it provides up to a 25% performance gain.
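The core selection idea can be sketched as follows. This is a hypothetical toy model, not the paper's actual cost models: the format names match the abstract, but the cost functions and workload parameters are invented for illustration.

```python
def select_format(workload, cost_models):
    # Pick the format whose estimated cost (e.g., expected read volume)
    # is lowest for the given workload description.
    return min(cost_models, key=lambda fmt: cost_models[fmt](workload))

# Toy cost models (assumed coefficients): row-oriented formats pay for all
# columns on every read; a columnar format pays only for columns actually
# read, plus some per-column overhead.
cost_models = {
    "SequenceFile": lambda w: w["rows"] * w["cols_total"],
    "Avro":         lambda w: w["rows"] * w["cols_total"] * 0.8,
    "Parquet":      lambda w: w["rows"] * w["cols_read"] * 1.2,
}

projection_query = {"rows": 1_000_000, "cols_total": 50, "cols_read": 3}
full_scan        = {"rows": 1_000_000, "cols_total": 50, "cols_read": 50}

print(select_format(projection_query, cost_models))  # columnar format wins
print(select_format(full_scan, cost_models))         # row format wins
```

The real framework replaces these toy lambdas with calibrated cost models per format and fragmentation strategy, but the decision step is the same minimization over candidate formats.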