Differences and similarities between reinforcement learning and the classical optimal control framework
Author(s) - Gottschalk Simon, Burger Michael
Publication year - 2019
Publication title - PAMM
Language(s) - English
Resource type - Journals
ISSN - 1617-7061
DOI - 10.1002/pamm.201900390
Subject(s) - reinforcement learning , computer science , artificial neural network , position (finance) , controller (irrigation) , optimal control , control (management) , reinforcement , degrees of freedom (physics and chemistry) , value (mathematics) , artificial intelligence , actuator , control theory (sociology) , machine learning , mathematical optimization , mathematics , engineering , physics , structural engineering , finance , quantum mechanics , agronomy , economics , biology
In this contribution, we discuss Reinforcement Learning as an alternative way to solve optimal control problems. Especially for biomechanical models, well-established classical techniques can become complex and time-consuming, because such models often have many more actuators than degrees of freedom. Furthermore, the solution obtained with such a technique is normally applicable only to one specific setting: even a slight change of the model's initial value or of the desired end position can render the computed solution useless. We give a short overview of Reinforcement Learning and apply it to an optimal control problem that exhibits the challenges mentioned above. We use an algorithm that updates the weights and biases of a neural network, which takes the role of a controller, using simulated trajectories of the model generated by the current network.
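The abstract's idea — a network-based controller whose parameters are updated from simulated trajectories — can be sketched with a REINFORCE-style policy gradient. Everything below is an illustrative assumption, not the authors' exact setup: a toy 1-DOF point mass driven by two antagonistic actuators (more actuators than degrees of freedom, as in biomechanical models), a linear controller standing in for the neural network, and a quadratic tracking reward toward a desired end position.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: 1 degree of freedom, TWO actuators
# (an antagonistic pair), mimicking actuator redundancy.
def step(state, action, dt=0.1):
    pos, vel = state
    force = action[0] - action[1]          # net force of the actuator pair
    return np.array([pos + dt * vel, vel + dt * force])

def rollout(W, b, T=30, sigma=0.3):
    """Simulate one trajectory under a Gaussian policy whose mean is
    the (here, linear) controller W @ state + b; return the total reward."""
    state = np.array([0.0, 0.0])           # fixed initial value
    states, actions, ret = [], [], 0.0
    for _ in range(T):
        mean = W @ state + b               # controller output
        a = mean + sigma * rng.standard_normal(2)
        states.append(state.copy())
        actions.append(a)
        state = step(state, a)
        ret -= (state[0] - 1.0) ** 2       # reward: reach position 1.0
    return states, actions, ret

# REINFORCE-style update of the controller parameters from simulated
# trajectories; a running baseline reduces gradient variance.
W, b = np.zeros((2, 2)), np.zeros(2)
alpha, sigma, baseline = 0.01, 0.3, None
for _ in range(200):
    states, actions, ret = rollout(W, b, sigma=sigma)
    baseline = ret if baseline is None else 0.9 * baseline + 0.1 * ret
    adv = ret - baseline                   # advantage estimate
    for s, a in zip(states, actions):
        grad_log = (a - (W @ s + b)) / sigma**2   # grad of log N(mean, sigma)
        W += alpha * adv * np.outer(grad_log, s) / len(states)
        b += alpha * adv * grad_log / len(states)
```

Because the update uses only sampled rollouts of the model, no gradient of the dynamics is needed; the same loop would apply unchanged to a multilayer network, which is what makes the approach attractive for over-actuated biomechanical models.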