Open Access
An improved deep reinforcement learning approach for the dynamic job shop scheduling problem with random job arrivals
Author(s) -
Bin Luo,
Sibao Wang,
Bo Yang,
Lili Yi
Publication year - 2021
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1848/1/012029
Subject(s) - reinforcement learning , computer science , job shop , job shop scheduling , scheduling (production processes) , artificial intelligence , mathematical optimization , flow shop scheduling , mathematics , computer network , routing (electronic design automation)
Deep reinforcement learning (DRL) is a powerful approach to solving the dynamic job shop scheduling problem (DJSSP). However, most existing DRL approaches are based on dispatching rules, which makes them problem-specific, dependent on expert experience, and costly to implement. We propose a double-loop deep Q-network (DLDQN) method with an exploration loop and an exploitation loop to solve the DJSSP under random job arrivals, with the objective of minimizing makespan. By integrating the scheduler into a single-agent system, the proposed method also avoids complicated dispatching rules, improving its versatility. Experimental results confirm the superiority of our method over comparison algorithms.
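
To make the exploration/exploitation structure concrete, the sketch below shows a minimal Q-network scheduling agent in PyTorch with an epsilon-greedy action selector and a one-step TD update. It is an illustrative assumption of how such an agent could be wired up: the state dimension, the action set (candidate job-to-machine assignments), and the batch layout are hypothetical placeholders, not the paper's actual DLDQN formulation.

```python
# Minimal sketch of a DQN-style scheduling agent (illustrative only).
# STATE_DIM, N_ACTIONS, and the replay-batch layout are assumptions;
# the paper's exact state encoding and double-loop schedule are not given here.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 8    # assumed features: machine loads, queue lengths, arrival info
N_ACTIONS = 4    # assumed candidate job-to-machine assignments

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

def select_action(qnet, state, epsilon):
    """Epsilon-greedy choice: random action (exploration) with
    probability epsilon, otherwise the greedy Q-value action (exploitation)."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(state).argmax())

def td_update(qnet, target_net, optimizer, batch, gamma=0.99):
    """One-step temporal-difference update on a replay batch.
    batch = (states, actions, rewards, next_states, dones), all tensors."""
    states, actions, rewards, next_states, dones = batch
    q = qnet(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(1).values
        target = rewards + gamma * (1.0 - dones) * q_next
    loss = F.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A double-loop variant in the spirit of the abstract could run select_action with a high epsilon during a dedicated exploration loop to populate the replay buffer, then lower epsilon in a subsequent exploitation loop while continuing td_update steps.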
