Special issue: 2011 International Conference on Cloud and Green Computing (CGC2011)
Author(s) - Chen Jinjun, Liu Jianxun
Publication year - 2013
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.3092
Subject(s) - cloud computing , computer science , data science , concurrency , green computing , field (mathematics) , world wide web , distributed computing , mathematics , pure mathematics , operating system
This special issue of Concurrency and Computation: Practice and Experience contains selected high-quality papers from the 2011 International Conference on Cloud and Green Computing (CGC2011), held on December 11–13, 2011 in Sydney, Australia [1]. The CGC conference series aims to provide an international forum for presenting and discussing research and development trends in cloud and green computing. CGC2011 attracted many international attendees, enabling in-depth discussion and the exchange of ideas and results related to ongoing research. Many research and development efforts have been made in the field of cloud and green computing, such as [2–11]. More and more researchers from different areas are applying techniques from their respective fields to tackle tough issues in cloud and green computing, such as resource scheduling, security and privacy, service provision, power-aware computation and storage, and data service queries. This special issue aims to accommodate a range of papers from different perspectives and areas in order to provide a variety of views and insights for cloud and green computing research.

This special issue contains eight papers based on those presented at CGC2011; they are listed as [12–19]. The research problems in these papers have been analyzed systematically, and for specific approaches or models, evaluations have been performed to demonstrate their feasibility and advantages. The papers were selected on this basis and were also peer reviewed thoroughly. They are summarized in the following.

Paper [12] develops an adaptive service selection method for cross-cloud service composition. It can dynamically select proper services with near-optimal performance to adapt to changes over time. A case study is presented to demonstrate its performance.

Paper [13] attempts to identify the role of contextual properties of enterprise systems architecture in relation to service migration to cloud computing. It points out that cloud computing requires consumers to relinquish their ownership of and control over most architectural elements to cloud providers. A simulation is conducted to evaluate the feasibility of the proposed method.

Paper [14] proposes an economic and energy-aware cloud cost model. The model supports the decision-making process for business cases and enables cloud consumers and cloud providers to define their own business strategies and to analyze the respective impact on their business.

Paper [15] focuses on latency in global cloud service provision. It investigates whether latency, in terms of simple ping measurements, can be used as an indicator for other QoS parameters such as jitter and throughput. Corresponding experiments are conducted to demonstrate performance.

Paper [16] presents a number of policies that can be applied to multi-use clusters where computers are shared between interactive users and high-throughput computing. The paper also evaluates these policies by trace-driven simulations to determine their effect on the power consumed by the high-throughput workload and their impact on high-throughput users. The experimental results demonstrate significant power savings with the proposed policies.

Paper [17] designs an efficient data and task co-scheduling strategy for scheduling datasets and tasks together. A simulation was conducted on the well-known Tianhe supercomputer platform. The simulation results demonstrate that the proposed strategy can effectively improve workflow performance while reducing the total volume of data transferred across data centers.