
Congestion‐aware adaptive decentralised computation offloading and caching for multi‐access edge computing networks
Author(s) -
Tefera Getenet,
She Kun,
Chen Min,
Ahmed Awais
Publication year - 2020
Publication title -
IET Communications
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.355
H-Index - 62
eISSN - 1751-8636
pISSN - 1751-8628
DOI - 10.1049/iet-com.2020.0630
Subject(s) - computer science , distributed computing , mobile edge computing , edge computing , computer network , cloudlet , computation offloading , backhaul (telecommunications) , network congestion , cloud computing , scalability , scheduling (production processes) , server , network packet , base station , operations management , database , economics , operating system
Multi-access edge computing (MEC) has attracted considerable attention for revolutionising smart communication technologies and the Internet of Everything. Nowadays, smart end-user devices are designed to execute sophisticated applications that demand more resources and are increasingly connected to the global ecosystem. As a result, backhaul network traffic congestion grows enormously and user quality of experience is compromised. To address these challenges, the authors propose a congestion-aware adaptive decentralised computing, caching, and communication framework that orchestrates the dynamic network environment based on deep reinforcement learning for MEC networks. MEC is a paradigm shift that brings cloud services and capabilities to the edge of ubiquitous radio access networks, in close proximity to mobile subscribers. The framework can evolve to perform augmented decision-making for upcoming network generations. The problem is formulated using non-cooperative game theory, which is nondeterministic polynomial (NP)-hard to solve, and the authors show that the game admits a Nash equilibrium. In addition, they construct a decentralised adaptive scheduling algorithm to maximise the utility of each smart end-user device. Theoretical analysis and simulation results substantiate that the proposed algorithm achieves ultra-low latency, enhanced storage capability, lower energy consumption, and better scalability than the baseline scheme.
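The non-cooperative offloading game described in the abstract can be illustrated with a toy best-response iteration. This is not the authors' algorithm: the cost model (shared uplink bandwidth causing congestion, local vs. edge CPU latency) and all parameter values below are hypothetical, chosen only to show why a fixed point of best responses is a Nash equilibrium.

```python
# Illustrative sketch (not the paper's algorithm): a best-response
# iteration for a toy non-cooperative computation-offloading game.
# Task sizes, CPU rates, and the bandwidth-sharing model are all
# hypothetical assumptions for demonstration.

def local_cost(task_cycles, local_cpu):
    # Latency of executing the task on the device itself.
    return task_cycles / local_cpu

def offload_cost(task_bits, task_cycles, edge_cpu, bandwidth, n_offloaders):
    # Uplink is shared equally, so congestion grows with the number
    # of users who choose to offload.
    share = bandwidth / max(n_offloaders, 1)
    return task_bits / share + task_cycles / edge_cpu

def best_response_dynamics(users, edge_cpu, bandwidth, max_rounds=100):
    """Each user repeatedly picks the cheaper strategy given the others'
    current choices; a fixed point (no user changes) is a pure-strategy
    Nash equilibrium of this toy game."""
    decisions = [0] * len(users)  # 0 = execute locally, 1 = offload
    for _ in range(max_rounds):
        changed = False
        for i, (bits, cycles, cpu) in enumerate(users):
            others = sum(decisions) - decisions[i]
            off = offload_cost(bits, cycles, edge_cpu, bandwidth, others + 1)
            loc = local_cost(cycles, cpu)
            choice = 1 if off < loc else 0
            if choice != decisions[i]:
                decisions[i] = choice
                changed = True
        if not changed:  # no user can unilaterally improve its cost
            return decisions
    return decisions

# Three hypothetical users: (task bits, task cycles, local CPU in Hz).
users = [(4e6, 2e9, 1e9), (2e6, 1e9, 1.5e9), (8e6, 3e9, 0.8e9)]
eq = best_response_dynamics(users, edge_cpu=10e9, bandwidth=20e6)
print(eq)  # → [1, 1, 1]: here all users prefer offloading at equilibrium
```

In this toy instance the edge server is fast enough that offloading remains cheaper even with the shared uplink fully congested, so all three users offload at the equilibrium; shrinking `bandwidth` would push some users back to local execution.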