
Service Migration in Mobile Edge Computing Based on Reinforcement Learning
Author(s) - Chao Fan, Li Li
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1584/1/012058
Subject(s) - computer science, mobile edge computing, markov decision process, quality of service, computer network, cloud computing, distributed computing, server, reinforcement learning, enhanced data rates for gsm evolution, decision model, edge computing, markov process, artificial intelligence, machine learning, operating system, statistics, mathematics
Mobile edge computing (MEC) provides users with cloud computing capabilities at the edge of the mobile network, which effectively reduces network latency and improves the experience of end-users. User mobility in MEC cannot be ignored, and mobility management is an urgent problem to be solved. Service migration is an effective way to manage user mobility. However, migrating too frequently is undesirable because of the high migration overhead. In this paper, we propose a service migration decision algorithm that decides whether or not to migrate when the user moves out of the coverage of the offloaded MEC server. A Markov decision process (MDP) is used to model the service migration decision problem. We jointly consider the distance between users and their services, the resource requirements of the services, and the resource availability of the MEC servers. The MDP reward function is defined to maximize quality of service (QoS) while accounting for both migration cost and resource conditions, and the migration decision policy is obtained with a Q-learning algorithm. Finally, the proposed migration decision algorithm is validated through simulation.
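The abstract describes the approach only at a high level. As a concrete illustration, the following is a minimal Python sketch of a tabular Q-learning loop for a two-action migration decision (stay vs. migrate). The state encoding (hop distance to the serving server plus a discretized resource-availability level), the toy transition model, and every constant (MAX_DISTANCE, LOAD_LEVELS, MIGRATION_COST, the learning parameters) are assumptions made for this sketch, not details taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical problem constants (illustrative, not from the paper).
MAX_DISTANCE = 5        # hop distance between user and serving MEC server
LOAD_LEVELS = 4         # discretized resource availability of the target server
ACTIONS = (0, 1)        # 0 = keep service in place, 1 = migrate to nearest server
MIGRATION_COST = 2.0    # fixed cost charged whenever a migration is performed

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def reward(distance, load, action):
    """QoS grows as the user-service distance shrinks and as free
    resources grow; migrating is penalized by a fixed cost."""
    qos = (MAX_DISTANCE - distance) + load
    return qos - (MIGRATION_COST if action == 1 else 0.0)

def step(distance, load, action):
    """Toy transition model: migration resets the distance to zero,
    staying lets the user drift one hop farther; load is resampled."""
    if action == 1:
        distance = 0
    else:
        distance = min(distance + 1, MAX_DISTANCE)
    load = random.randrange(LOAD_LEVELS)
    return distance, load

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(2000):
    distance = random.randrange(MAX_DISTANCE + 1)
    load = random.randrange(LOAD_LEVELS)
    for _ in range(50):
        state = (distance, load)
        action = choose(state)
        r = reward(distance, load, action)
        distance, load = step(distance, load, action)
        nxt = (distance, load)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Standard Q-learning update toward the bootstrapped target.
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# After training, the greedy policy decides whether to migrate per state.
policy = {s: ("migrate" if Q[(s, 1)] > Q[(s, 0)] else "stay")
          for s in {k[0] for k in Q}}
```

In the paper's setting, the reward would additionally reflect the services' resource requirements; here a single discretized load level stands in for those resource terms.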