
Delay-sensitive Task Scheduling with Deep Reinforcement Learning in Mobile-edge Computing Systems
Author(s) -
Hao Meng,
Daichong Chao,
Qianying Guo,
Xiaowei Li
Publication year - 2019
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1229/1/012059
Subject(s) - computer science, reinforcement learning, mobile edge computing, distributed computing, q learning, edge computing, scheduling (production processes), cloud computing, mobile device, queue, computer network, server, artificial intelligence, operating system, operations management, economics
Mobile-edge computing (MEC) is a new network architecture concept that provides cloud-computing capabilities and an IT service environment for applications and services at the edge of the network; it offers low latency, high bandwidth, and real-time access to wireless network information. In this paper, we mainly consider the task scheduling and offloading problem on mobile devices, in which the computation data of the tasks offloaded to the MEC server have already been determined. To minimize the average slowdown and the average timeout period of tasks in the buffer queue, we propose a deep reinforcement learning (DRL) based algorithm that transforms the optimization problem into a learning problem. We also design a new reward function that guides the algorithm to learn the offloading policy directly from the environment. Simulation results show that, after a period of training, the proposed algorithm outperforms traditional heuristic algorithms.
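The abstract's core idea, learning an offloading policy from a reward that penalizes slowdown and timeouts, can be sketched with a toy tabular Q-learning agent. This is only an illustrative stand-in under assumed dynamics: the environment (`ToyMecEnv`), its state encoding, service time, arrival rate, and deadline are all hypothetical choices, not the paper's simulator, and tabular Q-learning replaces the paper's (unspecified) deep network.

```python
import random
import numpy as np

class ToyMecEnv:
    """Hypothetical queue-plus-server environment (not the paper's model).

    State: (buffered-task count, remaining server busy slots), both clipped.
    Action 0 = hold the head-of-queue task; action 1 = offload it to the
    MEC server. The reward penalizes slowdown and timeouts, echoing the
    paper's objective of minimizing average slowdown and timeout period.
    """
    MAX_QUEUE, MAX_BUSY, DEADLINE = 5, 5, 4

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.queue = [0]   # waiting times of tasks in the buffer queue
        self.busy = 0      # remaining busy slots on the MEC server
        return self._state()

    def _state(self):
        return (min(len(self.queue), self.MAX_QUEUE),
                min(self.busy, self.MAX_BUSY))

    def step(self, action):
        reward = 0.0
        if action == 1 and self.queue and self.busy == 0:
            wait = self.queue.pop(0)
            self.busy = 2                    # assumed fixed service time
            reward -= wait / self.DEADLINE   # slowdown penalty
        # Time advances: buffered tasks age, the server drains,
        # and a new task may arrive.
        self.queue = [w + 1 for w in self.queue]
        self.busy = max(0, self.busy - 1)
        if self.rng.random() < 0.5 and len(self.queue) < self.MAX_QUEUE:
            self.queue.append(0)
        reward -= sum(1 for w in self.queue if w > self.DEADLINE)  # timeouts
        return self._state(), reward

def train(episodes=200, steps=50, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning stand-in for the paper's DRL agent."""
    rng = random.Random(seed)
    q = np.zeros((ToyMecEnv.MAX_QUEUE + 1, ToyMecEnv.MAX_BUSY + 1, 2))
    env = ToyMecEnv(seed)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            a = rng.randrange(2) if rng.random() < eps else int(np.argmax(q[s]))
            s2, r = env.step(a)
            q[s][a] += alpha * (r + gamma * q[s2].max() - q[s][a])
            s = s2
    return q
```

Even in this reduced setting, the learned Q-values prefer offloading when a task is waiting and the server is idle, since holding tasks lets them age past the deadline and accumulate timeout penalties, which is the behavior the paper's reward shaping is designed to induce.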