
Dynamic performance–energy tradeoff consolidation with contention-aware resource provisioning in containerized clouds
Author(s) -
Rewer M. Canosa-Reyes,
Andrei Tchernykh,
Jorge M. Cortés-Mendoza,
Bernardo Pulido-Gaytán,
Raúl Rivera-Rodríguez,
José E. Lozano-Rizk,
Eduardo Morales,
Harold Enrique Castro Barrera,
Carlos Barrios-Hernandez,
Favio Medrano-Jaimes,
Arutyun Avetisyan,
Mikhail Babenko,
Alexander Yu. Drozdov
Publication year - 2022
Publication title -
PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0261856
Subject(s) - computer science , provisioning , cloud computing , bin packing problem , energy consumption , quality of service , efficient energy use , distributed computing , scheduling (production processes) , virtual machine , resource allocation , software deployment , heuristics , virtualization , computer network , operating system , operations management , bin , ecology , algorithm , electrical engineering , biology , engineering , economics
Containers have emerged as a more portable and efficient solution than virtual machines for cloud infrastructures, providing a flexible way to build and deploy applications. Quality of service, security, performance, and energy consumption, among other factors, are essential aspects of their deployment, management, and orchestration. Inappropriate resource allocation can lead to resource contention, which entails reduced performance, poor energy efficiency, and other potentially damaging effects. In this paper, we present a set of online job allocation strategies that optimize quality of service, energy savings, and completion time while accounting for contention for shared on-chip resources. We model job allocation as a multilevel dynamic bin-packing problem, yielding a lightweight runtime solution that minimizes contention and energy consumption while maximizing utilization. The proposed strategies are based on two- and three-level scheduling policies with container selection, capacity distribution, and contention-aware allocation. The energy model accounts for the joint execution of applications of different types on shared resources, generalized by the job concentration paradigm. We provide an experimental analysis of eighty-six scheduling heuristics on scientific workloads of memory- and CPU-intensive jobs. The proposed techniques outperform classical solutions in quality of service, energy savings, and completion time by 21.73–43.44%, 44.06–92.11%, and 16.38–24.17%, respectively, leading to cost-efficient resource allocation for cloud infrastructures.