Investigation of a Model‐Based Deep Reinforcement Learning Controller Applied to an Air Separation Unit in a Production Environment
Author(s) - Blum Nicolas, Krespach Valentin, Zapp Gerhard, Oehse Christian, Rehfeldt Sebastian, Klein Harald
Publication year - 2021
Publication title - Chemie Ingenieur Technik
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.365
H-Index - 36
eISSN - 1522-2640
pISSN - 0009-286X
DOI - 10.1002/cite.202100094
Subject(s) - reinforcement learning, flexibility (engineering), model predictive control, controller (irrigation), nonlinear system, computer science, control engineering, control theory (sociology), process (computing), control (management), production (economics), work (physics), artificial intelligence, engineering, mathematics, mechanical engineering, statistics, physics, macroeconomics, quantum mechanics, agronomy, economics, biology, operating system
The need for load flexibility and increased efficiency of energy-intensive processes has become increasingly important in recent years. Control of the process variables plays a decisive role in maximizing the efficiency of a plant. The widely used control models of linear model predictive controllers (LMPC) are only partly suitable for nonlinear processes. Machine learning offers one possibility for improvement. In this work, an approach for a purely data-driven controller based on reinforcement learning is explored at an air separation unit (ASU) in production. The approach combines a model predictive controller with a data-generated nonlinear control model. The resulting controller and its control performance are examined in detail on an ASU in real operation and compared with the previous LMPC solution. During the tests, the new control concept exhibited stable behavior over several weeks of production operation.
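The core idea the abstract describes, replacing an LMPC's linear control model with a data-driven nonlinear model inside a predictive control loop, can be sketched as follows. This is a minimal illustrative example, not the paper's method: `learned_model` stands in for a nonlinear model fitted to plant data (e.g. a neural network), and the controller uses simple random-shooting optimization over a finite horizon; all names, dimensions, and dynamics are hypothetical.

```python
import numpy as np

def learned_model(state, action):
    # Stand-in for a data-generated nonlinear control model; here a
    # toy one-dimensional nonlinear dynamics, purely for illustration.
    return state + 0.1 * np.tanh(action - state)

def mpc_action(state, setpoint, horizon=10, n_candidates=256):
    """Random-shooting model predictive control: sample candidate
    action sequences, roll each out through the learned model, and
    apply the first action of the lowest-cost sequence."""
    rng = np.random.default_rng(0)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    best_cost, best_action = np.inf, 0.0
    for seq in candidates:
        s, cost = state, 0.0
        for a in seq:
            s = learned_model(s, a)
            cost += (s - setpoint) ** 2  # quadratic tracking cost
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

# Closed-loop simulation: drive the toy plant toward a setpoint.
state, setpoint = 0.0, 0.5
for _ in range(50):
    state = learned_model(state, mpc_action(state, setpoint))
print(state)
```

In a real application the rollout model would be trained on historical plant data and the optimizer would be far more sophisticated, but the loop structure (predict with the learned model, optimize over a horizon, apply the first action, repeat) is the same.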