Open Access
A Framework for Distributed Data-Parallel Execution in the Kepler Scientific Workflow System
Author(s) - Jianwu Wang, Daniel Crawl, İlkay Altıntaş
Publication year - 2012
Publication title - Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2012.04.178
Subject(s) - computer science , scalability , workflow , kepler , distributed computing , usability , reuse , parallel computing , operating system , database , programming language
Distributed Data-Parallel (DDP) patterns such as MapReduce have become increasingly popular as solutions to facilitate data-intensive applications, resulting in a number of systems supporting DDP workflows. Yet, applications or workflows built using these patterns are usually tightly coupled with the underlying DDP execution engine they select. We present a framework for distributed data-parallel execution in the Kepler scientific workflow system that enables users to easily switch between different DDP execution engines. We describe a set of DDP actors based on DDP patterns and directors for DDP workflow executions within the presented framework. We demonstrate how DDP workflows can be easily composed in the Kepler graphical user interface through the reuse of these DDP actors and directors, and how the generated DDP workflows can be executed in different distributed environments. Via a bioinformatics use case, we discuss the usability of the proposed framework and validate its execution scalability.
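To make the DDP patterns the abstract refers to concrete, here is a minimal, engine-agnostic sketch of the Map and Reduce patterns applied to word counting, the canonical MapReduce example. The names `ddp_map`, `ddp_reduce`, and `tokenize` are illustrative only and are not part of the Kepler API or any specific DDP engine.

```python
from collections import defaultdict
from itertools import chain

def ddp_map(map_fn, records):
    """Map pattern: apply map_fn to each input record independently,
    emitting zero or more (key, value) pairs per record. Because records
    are processed independently, this step is trivially parallelizable."""
    return list(chain.from_iterable(map_fn(r) for r in records))

def ddp_reduce(reduce_fn, pairs):
    """Reduce pattern: group the (key, value) pairs by key, then apply
    reduce_fn to each group's list of values."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce_fn(values) for key, values in groups.items()}

def tokenize(line):
    """Map function for word count: emit (word, 1) for each word."""
    return [(word, 1) for word in line.split()]

lines = ["kepler workflow", "kepler actor"]
counts = ddp_reduce(sum, ddp_map(tokenize, lines))
print(counts)  # {'kepler': 2, 'workflow': 1, 'actor': 1}
```

In the framework described above, the user composes the map and reduce logic as reusable actors in Kepler's graphical interface, while a director binds them to a concrete DDP engine at execution time; this separation is what lets the same workflow run unchanged on different engines.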
