
Towards an Adaptable and Generalizable Optimization Engine in Decision and Control: A Meta Reinforcement Learning Approach (Open Access)
Author(s): Sungwook Yang, Chaoying Pei, Ran Dai, Chuangchuang Sun
Publication year: 2024
Sampling-based model predictive control (MPC) has found significant success in optimal control problems with non-smooth system dynamics and cost functions. Many machine learning-based works have proposed to improve MPC by a) learning or fine-tuning the dynamics/cost function, or b) learning to optimize the updates of the MPC controller. For the latter, imitation learning-based optimizers are trained to update the MPC controller by mimicking expert demonstrations, which, however, are expensive or even unavailable. More significantly, many sequential decision-making problems arise in non-stationary environments, requiring an optimizer that is adaptable and generalizable enough to update the MPC controller for solving different tasks. To address these issues, we propose to learn an optimizer based on meta-reinforcement learning (RL) to update the controllers. This optimizer requires no expert demonstrations and enables fast adaptation (e.g., few-shot) when deployed on unseen control tasks. Experimental results validate the effectiveness of the learned optimizer with respect to fast adaptation.
Language(s): English
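
To make the abstract's idea concrete, below is a minimal, hedged sketch of a learned optimizer inside a sampling-based MPC loop; it is not the authors' implementation. It assumes a toy 1-D linear system x' = x + gain*u, an MPPI/CEM-style controller, and a tiny linear-Gaussian policy (the "learned optimizer") that only chooses the step size for the controller's mean update. The policy is meta-trained with plain REINFORCE across tasks that differ in the dynamics gain and then adapted with a few episodes on an unseen gain. Every function name (rollout_cost, mpc_update, run_episode, reinforce), feature, and hyperparameter here is an illustrative assumption, not taken from the paper.

# Illustrative sketch only -- toy stand-in for a meta-RL-trained optimizer that
# updates a sampling-based MPC controller; all names and settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, N_SAMPLES, SIGMA_U, SIGMA_A = 10, 64, 0.3, 0.1   # horizon, samples, noise scales

def rollout_cost(x0, actions, gain):
    """Quadratic cost of one sampled action sequence on x' = x + gain*u."""
    x, cost = x0, 0.0
    for u in actions:
        cost += x**2 + 0.01*u**2
        x = x + gain*u
    return cost + x**2

def mpc_update(x, mu, gain, theta, train=True):
    """One sampling-based MPC iteration. The learned optimizer (theta) decides
    how far to move the action-sequence mean toward the cost-weighted target."""
    samples = mu + SIGMA_U*rng.standard_normal((N_SAMPLES, H))
    costs = np.array([rollout_cost(x, s, gain) for s in samples])
    w = np.exp(-(costs - costs.min())); w /= w.sum()
    target = w @ samples                               # MPPI-style weighted mean
    feats = np.array([x, np.log(costs.mean() + 1e-8), 1.0])
    m = 1.0/(1.0 + np.exp(-theta @ feats))             # mean step size in (0, 1)
    a = m + SIGMA_A*rng.standard_normal() if train else m
    return (1.0 - a)*mu + a*target, (a, m, feats)

def run_episode(theta, gain, T=15, train=True):
    """Control episode; returns the episodic return and the REINFORCE gradient trace."""
    x, mu, ret = 1.0, np.zeros(H), 0.0
    grad = np.zeros_like(theta)
    for _ in range(T):
        mu, (a, m, feats) = mpc_update(x, mu, gain, theta, train)
        u = mu[0]
        ret -= x**2 + 0.01*u**2                        # reward = negative cost
        x = x + gain*u
        mu = np.roll(mu, -1); mu[-1] = 0.0             # receding-horizon shift
        # d log N(a; m, SIGMA_A^2) / d theta, with m = sigmoid(theta @ feats)
        grad += (a - m)/SIGMA_A**2 * m*(1.0 - m) * feats
    return ret, grad

def reinforce(theta, gains, iters, lr=1e-3):
    """Meta-train (or few-shot adapt) the learned optimizer with vanilla REINFORCE."""
    baseline = None
    for _ in range(iters):
        gain = rng.choice(gains)                       # sample a task
        ret, grad = run_episode(theta, gain)
        baseline = ret if baseline is None else 0.9*baseline + 0.1*ret
        theta += lr * (ret - baseline) * grad
    return theta

theta = np.zeros(3)
theta = reinforce(theta, gains=np.linspace(0.5, 1.5, 6), iters=300)  # meta-training
theta = reinforce(theta, gains=np.array([2.0]), iters=10)            # few-shot adaptation
print("return on unseen task:", run_episode(theta, gain=2.0, train=False)[0])

The design choice in this sketch is that the learned optimizer does not replace the sampling controller; it only modulates how aggressively the sampling mean moves toward the cost-weighted target, so the meta-trained parameters carry the cross-task knowledge while the MPC structure stays intact.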
