Task scheduling based on deep reinforcement learning in a cloud manufacturing environment
Author(s) -
Dong Tingting,
Xue Fei,
Xiao Chuangbai,
Li Juntao
Publication year - 2020
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.5654
Subject(s) - reinforcement learning , computer science , cloud computing , scheduling (production processes) , distributed computing , cloud manufacturing , two level scheduling , job shop scheduling , computation , dynamic priority scheduling , artificial intelligence , server , schedule , mathematical optimization , algorithm , computer network , operating system , mathematics
Summary Cloud manufacturing promotes the intelligent transformation of the traditional manufacturing mode. In a cloud manufacturing environment, task scheduling plays an important role. However, as the number of problem instances grows, solution quality and computation time increasingly trade off against each other. Existing task scheduling algorithms often converge to locally optimal solutions at high computational cost, especially on large problem instances. To tackle this problem, a task scheduling algorithm based on a deep reinforcement learning architecture (RLTS) is proposed to dynamically schedule tasks with precedence relationships onto cloud servers so as to minimize the total task execution time. The Deep Q‐Network (DQN), a deep reinforcement learning algorithm, is employed to handle the complexity and high dimensionality of the problem. In the simulation, the performance of the proposed algorithm is compared with four other heuristic algorithms. The experimental results show that RLTS effectively solves the task scheduling problem in a cloud manufacturing environment.
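The core idea described in the abstract — learning a policy that assigns precedence-constrained tasks to servers so as to minimize the makespan — can be illustrated with a minimal sketch. The instance below (task durations, precedence edges, number of servers) is a hypothetical toy example, not data from the paper, and tabular Q-learning is used as a simplified stand-in for the paper's DQN: the state is the index of the next task, the action is the server choice, and the reward is the negative increase in makespan.

```python
import random

# Hypothetical toy instance (not from the paper): task durations and a
# precedence map (task -> prerequisite tasks). Task indices are already
# in a topological order, so processing them 0..n-1 respects precedence.
durations = [3, 2, 4, 2, 3]
preds = {2: [0, 1], 3: [2], 4: [2]}
n_servers = 2

def schedule(policy):
    """Assign tasks in index order to the servers given by `policy`
    and return the resulting makespan."""
    server_free = [0.0] * n_servers   # time each server becomes free
    finish = [0.0] * len(durations)   # finish time of each task
    for t in range(len(durations)):
        s = policy[t]
        ready = max((finish[p] for p in preds.get(t, [])), default=0.0)
        start = max(server_free[s], ready)
        finish[t] = start + durations[t]
        server_free[s] = finish[t]
    return max(finish)

# Tabular Q-learning, a simplified stand-in for the paper's DQN:
# Q[t][s] estimates the value of placing task t on server s.
random.seed(0)
Q = [[0.0] * n_servers for _ in durations]
alpha, gamma, eps = 0.1, 0.9, 0.2
for episode in range(2000):
    server_free = [0.0] * n_servers
    finish = [0.0] * len(durations)
    makespan = 0.0
    for t in range(len(durations)):
        # epsilon-greedy action selection over the servers
        if random.random() < eps:
            a = random.randrange(n_servers)
        else:
            a = max(range(n_servers), key=lambda s: Q[t][s])
        ready = max((finish[p] for p in preds.get(t, [])), default=0.0)
        start = max(server_free[a], ready)
        finish[t] = start + durations[t]
        server_free[a] = finish[t]
        # reward = negative increase in makespan caused by this placement
        reward = -(max(makespan, finish[t]) - makespan)
        makespan = max(makespan, finish[t])
        future = max(Q[t + 1]) if t + 1 < len(durations) else 0.0
        Q[t][a] += alpha * (reward + gamma * future - Q[t][a])

# Greedy policy extracted from the learned Q-table.
policy = [max(range(n_servers), key=lambda s: Q[t][s]) for t in range(len(durations))]
print("learned assignment:", policy, "makespan:", schedule(policy))
```

The paper's DQN would replace the Q-table with a neural network so the state can encode richer features (server loads, remaining tasks) and scale to the large, high-dimensional instances the abstract mentions; here the critical path 0→2→4 gives a makespan lower bound of 10, which a good policy approaches.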