Open Access
Reinforcement Learning Guided by Double Replay Memory
Author(s) - JiSeong Han, Kichun Jo, Wontaek Lim, Yonghak Lee, Kyoungmin Ko, Eunseon Sim, Junsang Cho, Sunghwan Kim
Publication year - 2021
Publication title - Journal of Sensors
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.399
H-Index - 43
eISSN - 1687-7268
pISSN - 1687-725X
DOI - 10.1155/2021/6652042
Subject(s) - reinforcement learning, computer science, artificial intelligence, human–computer interaction
Experience replay memory in reinforcement learning enables agents to remember and reuse past experiences. Most reinforcement learning models rely on a single experience replay memory to operate agents. In this article, we propose a framework that accommodates a doubly used experience replay memory, exploiting both important transitions and new transitions simultaneously. In numerical studies, deep Q-networks (DQN) equipped with double experience replay memory are examined under various scenarios. A self-driving car requires an automated agent that decides, in real time, when it is appropriate to change lanes. To this end, we apply the proposed agent to Simulation of Urban Mobility (SUMO) experiments. We also verify its applicability to reinforcement learning problems whose action space is discrete (e.g., computer game environments). Taken together, we conclude that the proposed framework outperforms previously known reinforcement learning models by virtue of double experience replay memory.
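To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of a "double" experience replay memory: one buffer keeps transitions weighted by importance (here approximated by TD error), and a second buffer keeps the newest transitions, with mini-batches drawn from both. The class name, the 50/50 sampling split, and the priority scheme are assumptions made for demonstration only.

```python
import random
from collections import deque


class DoubleReplayMemory:
    """Sketch of a replay memory that is 'doubly used':
    an importance-weighted buffer plus a recency buffer."""

    def __init__(self, capacity=10000, recent_capacity=1000):
        self.priority_buffer = []                            # (priority, transition) pairs
        self.recent_buffer = deque(maxlen=recent_capacity)   # newest transitions only
        self.capacity = capacity

    def push(self, transition, td_error=1.0):
        """Store a transition in both buffers.

        The absolute TD error serves as a priority so that 'important'
        transitions are sampled more often from the priority buffer.
        """
        self.recent_buffer.append(transition)
        self.priority_buffer.append((abs(td_error), transition))
        if len(self.priority_buffer) > self.capacity:
            # Drop the lowest-priority transition when the buffer is full.
            self.priority_buffer.sort(key=lambda x: x[0])
            self.priority_buffer.pop(0)

    def sample(self, batch_size):
        """Draw half the mini-batch from each buffer (an assumed split)."""
        half = batch_size // 2
        # Priority-weighted sampling over important transitions.
        priorities = [p for p, _ in self.priority_buffer]
        important = random.choices(
            [t for _, t in self.priority_buffer],
            weights=priorities, k=half)
        # Uniform sampling over the most recent transitions.
        recent = random.sample(list(self.recent_buffer),
                               min(half, len(self.recent_buffer)))
        return important + recent


# Hypothetical usage inside a DQN training loop:
memory = DoubleReplayMemory()
memory.push(("s", "a", 1.0, "s_next"), td_error=0.5)
memory.push(("s_next", "a", 0.0, "s_last"), td_error=2.0)
batch = memory.sample(2)
```

In a DQN agent, such a memory would replace the single replay buffer: transitions are pushed after every environment step, and mini-batches mixing important and recent experiences are sampled at each gradient update.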
