Applying reinforcement learning towards automating resource allocation and application scalability in the cloud
Author(s) - Enda Barrett, Enda Howley, Jim Duggan
Publication year - 2012
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.2864
Subject(s) - reinforcement learning , cloud computing , computer science , scalability , virtual machine , resource allocation , markov decision process , distributed computing , curse of dimensionality , virtualization , offline learning , artificial intelligence , markov process , online learning , operating system , computer network , multimedia , statistics , mathematics
SUMMARY Public Infrastructure as a Service (IaaS) clouds such as Amazon, GoGrid and Rackspace deliver computational resources by means of virtualisation technologies. These technologies allow multiple independent virtual machines to reside in apparent isolation on the same physical host. Dynamically scaling applications running on IaaS clouds can lead to varied and unpredictable results because of the performance interference effects associated with co-located virtual machines. Determining appropriate scaling policies in a dynamic, non-stationary environment is non-trivial. One principal advantage exhibited by IaaS clouds over their traditional hosting counterparts is the ability to scale resources on demand. However, a resource allocation problem arises as to which resources should be added and removed when the underlying performance of the resources is in a constant state of flux. Decision-theoretic frameworks such as Markov Decision Processes are particularly suited to decision making under uncertainty. By applying a temporal-difference reinforcement learning algorithm known as Q-learning, optimal scaling policies can be determined. Additionally, reinforcement learning techniques typically suffer from curse-of-dimensionality problems, where the state space grows exponentially with each additional state variable. To address this challenge, we also present a novel parallel Q-learning approach aimed at reducing the time taken to determine optimal policies whilst learning online. Copyright © 2012 John Wiley & Sons, Ltd.
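The scaling problem the abstract describes can be sketched as a small Q-learning loop. The model below is purely illustrative and not the paper's formulation: the state (a VM count from 1 to 5), the fixed demand of 35 work units, the capacity of 10 units per VM, and the reward that trades SLA penalties against per-VM running cost are all assumptions made here to show the temporal-difference update in miniature.

```python
import random

# Hypothetical toy model (assumptions, not the paper's): the state is
# the current VM count, demand is fixed, and each VM serves 10 units.
VMS = range(1, 6)
ACTIONS = (-1, 0, +1)          # remove a VM, hold, add a VM
ALPHA, GAMMA = 0.1, 0.9        # learning rate and discount factor
DEMAND, CAP_PER_VM = 35, 10

def reward(vms):
    """Penalise unserved demand (an SLA violation) and per-VM cost."""
    unserved = max(0, DEMAND - CAP_PER_VM * vms)
    return -(5 * unserved + vms)

def step(vms, action):
    """Deterministic transition, clamped to the allowed VM range."""
    return min(max(vms + action, 1), 5)

def q_learn(episodes=2000, horizon=20, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in VMS for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(VMS))
        for _ in range(horizon):
            # Q-learning is off-policy: even a purely exploratory
            # behaviour policy lets the greedy policy converge.
            a = rng.choice(ACTIONS)
            s2 = step(s, a)
            r = reward(s2)
            # Temporal-difference update toward r + gamma * max_b Q(s', b).
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2
    return Q

def policy(Q, s):
    """Greedy action for state s under the learned Q-table."""
    return max(ACTIONS, key=lambda a: Q[(s, a)])
```

Under these assumptions the learned greedy policy scales up while under-provisioned, holds at four VMs (the cheapest count that covers the demand), and scales down when over-provisioned. A real deployment would face the stochastic, non-stationary performance described in the abstract rather than this deterministic toy transition model.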
