
Exploration-Exploitation Strategies in Deep Q-Networks Applied to Route-Finding Problems
Author(s) - Pengyuan Wei
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1684/1/012073
Subject(s) - softmax function, reinforcement learning, computer science, simplicity, shortest path problem, class (philosophy), path (computing), value (mathematics), mathematical optimization, artificial intelligence, work (physics), machine learning, graph, mathematics, theoretical computer science, artificial neural network, engineering, mechanical engineering, philosophy, epistemology, programming language
Reinforcement learning is a class of algorithms that allows computers to learn how to accumulate reward effectively in an environment and ultimately achieve strong results. Among its core concepts, the exploration-exploitation tradeoff is particularly important, since a good strategy can improve both learning speed and the final total reward. In this work, we applied the DQN algorithm with different exploration-exploitation strategies to solve traditional route-finding problems. The experimental results show that the epsilon-greedy strategy whose epsilon value drops parabolically as the reward improves performs best, while incorporating the softmax function yields unsatisfactory results. We hypothesize that the maze used in this work, in which the agent attempts to find the shortest path, is simple enough that applying softmax to further encourage exploration brings no benefit. Future work therefore involves experimenting with mazes of different scales and complexities and observing which exploration-exploitation strategy works best in each condition.
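
Below is a minimal illustrative sketch (not taken from the paper) of the two action-selection strategies compared above, written in Python with NumPy. The abstract does not give the exact parabolic schedule, so parabolic_epsilon is one assumed reading: epsilon falls quadratically as the best reward observed so far approaches a target reward. The function names, parameters, and target_reward value are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def parabolic_epsilon(best_reward, target_reward, eps_max=1.0, eps_min=0.05):
    # Assumed form: epsilon decays quadratically with reward improvement,
    # so exploration shrinks faster as the agent nears the target reward.
    progress = min(max(best_reward / target_reward, 0.0), 1.0)
    return eps_min + (eps_max - eps_min) * (1.0 - progress) ** 2

def epsilon_greedy_action(q_values, epsilon):
    # With probability epsilon pick a random action, otherwise the greedy one.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_action(q_values, temperature=1.0):
    # Boltzmann exploration: sample actions with probability proportional
    # to exp(Q / T); subtracting the max keeps the exponentials stable.
    logits = np.asarray(q_values, dtype=float) / temperature
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(q_values), p=probs))

# Example: Q-values for four maze actions (up, down, left, right).
q = [0.1, 0.4, 0.2, 0.3]
eps = parabolic_epsilon(best_reward=60.0, target_reward=100.0)
print(epsilon_greedy_action(q, eps), softmax_action(q, temperature=0.5))

The design intuition behind a reward-driven parabolic schedule is that exploration stays high while the policy is still improving and collapses quickly once returns approach their ceiling, whereas softmax keeps sampling suboptimal actions in proportion to their Q-values, which may add little in a small maze with a single short optimal path.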