
Off-line Deep Reinforcement Learning for Maintenance Optimization
Author(s) -
Hamed Khorasgani,
Ahmed Farhat,
Haiyan Wang,
Chetan Gupta
Publication year - 2021
Publication title -
Proceedings of the Annual Conference of the Prognostics and Health Management Society
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.18
H-Index - 11
ISSN - 2325-0178
DOI - 10.36001/phmconf.2021.v13i1.3009
Subject(s) - reinforcement learning , computer science , artificial intelligence , machine learning , profit (economics) , preventive maintenance , estimation , reliability engineering , risk analysis (engineering) , operations research , engineering , medicine , systems engineering , economics , microeconomics
Several machine learning and deep learning frameworks have been proposed in recent years to solve remaining useful life estimation and failure prediction problems. Having access to the remaining useful life estimate or the likelihood of failure in the near future helps operators assess the operating conditions and, therefore, make better repair and maintenance decisions. However, many operators believe that remaining useful life estimation and failure prediction solutions are incomplete answers to the maintenance challenge. They would argue that knowing the likelihood of failure in a given time interval, or having access to an estimate of the remaining useful life, is not enough to make maintenance decisions that minimize cost while keeping them safe. In this paper, we present a maintenance framework based on off-line deep reinforcement learning which, instead of providing information such as the likelihood of failure, suggests actions such as “continue the operation” or “visit a repair shop” to the operators in order to maximize the overall profit. Using off-line reinforcement learning makes it possible to learn the optimal maintenance policy from historical data without relying on expensive simulators. We demonstrate the application of our solution in a case study using the NASA C-MAPSS dataset.
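
To make the described framework more concrete, the sketch below shows a minimal offline (batch) Q-learning loop over logged transitions with the two maintenance actions named in the abstract. It is not the authors' implementation: the state dimension, reward scale, network architecture, and training schedule are illustrative assumptions only.

```python
# Hedged sketch: offline (batch) Q-learning for a two-action maintenance policy.
# Not the paper's method; feature sizes, rewards, and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn

ACTIONS = {0: "continue the operation", 1: "visit a repair shop"}
STATE_DIM = 14          # assumption: one feature per C-MAPSS-style sensor channel
GAMMA = 0.99            # discount factor for future profit

class QNet(nn.Module):
    """Small MLP mapping a sensor-feature state to one Q-value per action."""
    def __init__(self, state_dim: int, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def offline_q_learning(transitions, epochs=50, lr=1e-3):
    """Fit a Q-network from a fixed batch of logged transitions.

    transitions: list of (state, action, reward, next_state, done) tuples
    collected from historical operation and maintenance records; no simulator
    or further environment interaction is used during training.
    """
    s, a, r, s2, d = (np.array(x, dtype=np.float32) for x in zip(*transitions))
    s, s2 = torch.tensor(s), torch.tensor(s2)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r, d = torch.tensor(r), torch.tensor(d)

    q_net, target_net = QNet(STATE_DIM), QNet(STATE_DIM)
    target_net.load_state_dict(q_net.state_dict())
    opt = torch.optim.Adam(q_net.parameters(), lr=lr)

    for epoch in range(epochs):
        with torch.no_grad():
            # Bellman target: profit-style reward plus discounted future value.
            target = r + GAMMA * (1 - d) * target_net(s2).max(dim=1).values
        q = q_net(s).gather(1, a).squeeze(1)
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if epoch % 10 == 0:
            target_net.load_state_dict(q_net.state_dict())
    return q_net

def recommend(q_net, state):
    """Greedy action recommendation for a new sensor-feature state."""
    with torch.no_grad():
        q = q_net(torch.tensor(state, dtype=torch.float32))
    return ACTIONS[int(q.argmax())]
```

In this sketch the reward would encode the profit of continued operation minus repair and failure costs, so the greedy policy trades off run-to-failure risk against unnecessary shop visits; a full offline RL solution would typically also address distribution shift between the logged behavior policy and the learned policy.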