Open Access
Model Reference Adaptive Control for Online Policy Adaptation and Network Synchronization
Author(s) - Miguel Arevalo-Castiblanco, César A. Uribe, Eduardo Mojica-Nava
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.52591/202107242
Subject(s) - synchronization (alternating current), computer science, reference model, reinforcement learning, adaptation (eye), control (management), adaptive control, control theory (sociology), distributed computing, artificial intelligence, computer network, channel (broadcasting), physics, software engineering, optics
We propose an online adaptive synchronization method for leader-follower networks of heterogeneous agents. Synchronization is achieved using a distributed Model Reference Adaptive Controller (DMRAC-RL) that improves the performance of Reinforcement Learning (RL) policies trained on a reference model. The leader observes the performance of the reference model, while the followers observe the states and actions of the agents they are connected to, but not the reference model. Notably, the models of both the leader and the followers may differ from the reference model on which the RL control policy was trained. To solve the distributed control problem, DMRAC-RL uses an internal loop that adjusts the learned policy for each agent through an augmented input. Numerical examples on the synchronization of a network of inverted pendulums support our theoretical findings.
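For intuition, the following minimal sketch illustrates the model-reference-adaptive idea the abstract describes, reduced to a single agent: a fixed linear gain stands in for the RL-trained policy, and a Lyapunov-style adaptive term supplies the augmented input that keeps a mismatched pendulum tracking the reference model. All dynamics, gains, the regressor, and the adaptation law here are illustrative assumptions, not the authors' DMRAC-RL implementation.

```python
# Sketch of MRAC-style policy augmentation (illustrative, not the paper's code).
# A linear "RL policy" stub is corrected by an adaptive term so that an agent
# whose dynamics differ from the reference model still tracks that model.
import numpy as np

dt, T = 0.01, 10.0
steps = int(T / dt)

# Reference model (linearized pendulum the policy was "trained" on):
#   x = [angle, angular rate],  x_ref' = A_ref x_ref + B_ref u
A_ref = np.array([[0.0, 1.0], [9.8, 0.0]])
B_ref = np.array([[0.0], [1.0]])

# Stabilizing gain for the reference model (stand-in for an RL policy).
K_rl = np.array([[25.0, 8.0]])                   # u_rl = -K_rl @ x

# Heterogeneous follower with mismatched dynamics (different mass/friction).
A_agent = np.array([[0.0, 1.0], [12.5, -0.4]])
B_agent = np.array([[0.0], [1.3]])

# Lyapunov-based adaptation: theta_hat' = Gamma * phi(x) * (e^T P B_ref).
Gamma = 10.0 * np.eye(2)
P = np.array([[2.0, 0.5], [0.5, 1.0]])  # assumed to solve a Lyapunov equation
theta_hat = np.zeros((2, 1))            # adaptive parameters

x = np.array([[0.3], [0.0]])            # agent state (perturbed start)
x_ref = np.array([[0.3], [0.0]])        # reference-model state

for _ in range(steps):
    e = x - x_ref                       # tracking error w.r.t. reference model
    phi = x                             # simplest (linear) regressor choice

    u_rl = -K_rl @ x                    # policy learned on the reference model
    u_ad = -theta_hat.T @ phi           # augmented adaptive input
    u = u_rl + u_ad

    # Adaptation law drives the tracking error down despite model mismatch.
    theta_hat += dt * (Gamma @ phi @ (e.T @ P @ B_ref))

    # Euler integration of agent and reference model.
    x += dt * (A_agent @ x + B_agent @ u)
    x_ref += dt * (A_ref @ x_ref + B_ref @ (-K_rl @ x_ref))

print("final tracking error:", np.linalg.norm(x - x_ref))
```

In the distributed setting described in the abstract, each follower would build its error signal from the states and actions of its neighbors rather than from direct access to the reference model; the single-agent loop above only shows how the augmented input corrects an RL policy under model mismatch.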
