Open Access
Gantry Work Cell Scheduling through Reinforcement Learning with Knowledge-guided Reward Setting
Author(s) - Xinyan Ou, Qing Chang, Jorge Arinez, Jing Zou
Publication year - 2018
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2018.2800641
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
In this paper, a manufacturing work cell that uses gantries to move between machines for loading and unloading materials/parts is considered. The production performance of the gantry work cell depends strongly on the gantry movements in real operation. This paper formulates the gantry scheduling problem as a reinforcement learning problem, in which an optimal gantry moving policy is found to maximize the system output. The problem is solved with the Q-learning algorithm. The gantry system is analyzed and its real-time performance is evaluated by permanent production loss and production loss risk, which provide a theoretical basis for defining the reward function in the Q-learning algorithm. A numerical study demonstrates the effectiveness of the proposed policy by comparison with the first-come-first-served policy.
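As a rough illustration of the approach the abstract describes, the sketch below shows tabular Q-learning with a knowledge-guided reward that penalizes permanent production loss and production loss risk. The state/action names, the risk weight, and the reward shape are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Hyperparameters (assumed values for illustration only).
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Hypothetical gantry actions: move to a machine or stay put.
ACTIONS = ["move_to_m1", "move_to_m2", "stay"]

# Q-table, keyed by (state, action); unseen entries default to 0.0.
Q = defaultdict(float)

def reward(permanent_loss, loss_risk, w_risk=0.5):
    # Knowledge-guided reward (assumed form): penalize realized
    # permanent production loss plus a weighted production loss risk.
    return -(permanent_loss + w_risk * loss_risk)

def choose_action(state):
    # Epsilon-greedy action selection over the Q-table.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, r, next_state):
    # Standard Q-learning update toward the bootstrapped target.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```

In a full implementation, `state` would encode machine and buffer status of the work cell, and the loss terms would come from the real-time performance evaluation the paper describes; here they are placeholders.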
