Special section on autonomic cloud computing: technologies, services, and applications
Author(s) - Rajiv Ranjan, Rajkumar Buyya, Manish Parashar
Publication year - 2011
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.1865
Subject(s) - cloud computing , computer science , utility computing , grid computing , cloud testing , virtualization , elasticity (physics) , end user computing , distributed computing , virtual machine , cloud computing security , scalability , pooling , computer security , operating system , grid , materials science , geometry , mathematics , artificial intelligence , composite material
Welcome to this special issue of the Concurrency and Computation: Practice and Experience (CCPE) journal. This special issue compiles a number of excellent technical contributions that significantly advance the state of the art in autonomic cloud computing.

Cloud computing [1, 2] is an emerging utility computing model that allows users to dynamically access, select, and configure a large pool of IT resources (virtual machine templates, storage, and networking elements) and deliver them as 'computing utilities' to consumers in a pay-as-you-go manner. Several vendors have emerged in this space, including IBM, VMware, Microsoft, Manjrasoft, and Yahoo. This model of computing is particularly attractive for small and medium-sized enterprises, as it allows them to focus on consuming or offering services on top of cloud infrastructure. At a high level, cloud computing might not seem radically different from existing paradigms such as the World Wide Web, grid computing, and cluster computing. However, cloud computing is distinguished by technical characteristics such as on-demand resource pooling, rapid elasticity, self-service, near-infinite scalability, end-to-end virtualization support, and robust support for resource usage metering and billing. Nontechnical differentiators include services offered under a pay-as-you-go model, guaranteed Service Level Agreements (SLAs), faster time to deployment, lower upfront costs, little or no maintenance overhead, and environmental friendliness.

Unpredictability is a fact of life in any distributed computing environment, and the Cloud is no exception. Performance unpredictability [3] in the Cloud is a major issue for many users and is regarded as one of the major obstacles to cloud computing. For instance, researchers (biologists, physicists, finance analysts, etc.) expect guaranteed performance for their experiments, independent of the current workload and state [4] of the Cloud's IT resources, because this is key to the repeatability of results. Other examples are small and medium-sized enterprises (gaming companies, web application providers) that want strict SLA assurances; for example, an end-user request for a web page or multimedia content has to be served within the agreed time limit. Hence, it is highly important for Cloud vendors to be able to offer guaranteed SLAs based on performance metrics such as response time and throughput. Interestingly, vendors currently seem to base their SLAs on the availability of their offerings while ignoring response time and throughput altogether. It is therefore clear that dealing with performance unpredictability is critical to exploiting the full potential of clouds.

In this special issue, we have compiled high-quality papers that deal with some of the aforementioned issues. Next, we briefly describe the technical contributions selected for publication. All of the selected papers underwent a rigorous peer-review process.

End-to-end QoS negotiation for establishing SLAs for composite services involves compound multiparty negotiations, in which the composite service provider concurrently negotiates with multiple candidates for each atomic service, selecting the one that best satisfies the atomic service's QoS preferences while ensuring that the end-to-end QoS requirements are also fulfilled. To negotiate with potential candidates, it is necessary to derive the atomic utility boundaries from the global utility boundary.
Additionally, there has to be a mechanism for updating these boundaries in subsequent negotiation rounds based on the individual negotiation outcomes. To address these complexities, in the paper [5] titled 'Establishing Composite SLAs through Concurrent QoS Negotiation with Surplus Redistribution', Richter et al. propose an algorithm for decomposing the global utility boundary into atomic service utility boundaries and for redistributing the surplus from successful negotiation outcomes among the remaining negotiations.
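To illustrate the general idea of decomposing a global utility boundary and redistributing negotiation surplus, the short Python sketch below splits an end-to-end response-time budget across hypothetical atomic services and hands the slack from each settled negotiation to the negotiations that are still open. The service names, weights, and the equal-split surplus policy are illustrative assumptions only, not the actual algorithm proposed by Richter et al. in [5].

# Minimal sketch (not the algorithm of [5]): decompose a global response-time
# budget into per-service boundaries, then redistribute the surplus left over
# from each settled negotiation among the negotiations that are still open.
# All service names, weights, and offers are hypothetical.

def decompose(global_budget_ms, weights):
    """Split the end-to-end budget into atomic boundaries, proportional to weights."""
    total = sum(weights.values())
    return {svc: global_budget_ms * w / total for svc, w in weights.items()}

def redistribute(boundaries, settled_svc, agreed_ms, open_svcs):
    """Hand the unused slack of a settled negotiation to the remaining open ones."""
    surplus = boundaries[settled_svc] - agreed_ms   # slack gained in this negotiation
    if surplus > 0 and open_svcs:
        share = surplus / len(open_svcs)            # equal split; other policies are possible
        for svc in open_svcs:
            boundaries[svc] += share
    boundaries[settled_svc] = agreed_ms
    return boundaries

# Hypothetical composite service with a 900 ms end-to-end response-time bound.
weights = {"auth": 1, "catalog": 2, "payment": 3}
boundaries = decompose(900, weights)   # {'auth': 150.0, 'catalog': 300.0, 'payment': 450.0}

# Suppose the 'auth' candidate agrees to 100 ms: the 50 ms surplus is split
# between the two negotiations that are still running.
boundaries = redistribute(boundaries, "auth", 100, ["catalog", "payment"])
print(boundaries)                      # {'auth': 100, 'catalog': 325.0, 'payment': 475.0}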
