
Deep Reinforcement Learning based Path Planning for Mobile Robot in Unknown Environment
Author(s) -
Yang Wang,
Yilin Fang,
Ping Lou,
Junwei Yan,
Nianyun Liu
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1576/1/012009
Subject(s) - reinforcement learning, mobile robot, computer science, motion planning, path (computing), robot, convergence, artificial intelligence, real-time computing
It is a trend for robots to replace humans in industrial fields as labor costs rise. Mobile robots are widely used to execute tasks in harsh industrial environments, and planning a path in an unknown environment is an important problem for them. The ordinary deep Q-network (DQN), an efficient reinforcement learning method, has been used for mobile robot path planning in unknown environments, but it generally converges slowly. This paper presents a method based on Double DQN (DDQN) with prioritized experience replay (PER) for mobile robot path planning in unknown environments. Sensing only local information about its surroundings, the mobile robot plans its path with this method. The experimental results show that the proposed method achieves a higher convergence speed and success rate than the standard DQN method in the same experimental environment.
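
As a rough illustration of the two components the abstract combines, the sketch below shows proportional prioritized experience replay together with the Double DQN target computation (the online network selects the next action, the target network evaluates it). This is not the authors' implementation: the linear stand-in Q-functions, the state and action dimensions, and the hyperparameters (GAMMA, alpha, beta) are illustrative assumptions, and NumPy arrays stand in for real neural networks.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, GAMMA = 4, 5, 0.99  # assumed sizes for illustration

# Stand-in linear Q-functions: Q(s, .) = s @ W, with online and target copies.
W_online = rng.normal(size=(STATE_DIM, N_ACTIONS))
W_target = W_online.copy()

def q_values(W, states):
    return states @ W

class PrioritizedReplay:
    """Proportional PER: P(i) ~ p_i**alpha, with importance-sampling weights."""
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-3):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data, self.priorities = [], []

    def add(self, transition):
        # New transitions get the current max priority so each is replayed at least once.
        p = max(self.priorities, default=1.0)
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition); self.priorities.append(p)

    def sample(self, batch_size):
        p = np.asarray(self.priorities) ** self.alpha
        probs = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        w = (len(self.data) * probs[idx]) ** (-self.beta)
        return idx, [self.data[i] for i in idx], w / w.max()

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(float(e)) + self.eps

def ddqn_targets(batch):
    s, a, r, s2, done = (np.asarray(x) for x in zip(*batch))
    # Double DQN: the online net picks the greedy action, the target net evaluates it.
    best_a = q_values(W_online, s2).argmax(axis=1)
    q_next = q_values(W_target, s2)[np.arange(len(batch)), best_a]
    y = r + GAMMA * (1.0 - done) * q_next
    td_error = y - q_values(W_online, s)[np.arange(len(batch)), a]
    return y, td_error

# Fill the buffer with random transitions and run one prioritized update step.
buffer = PrioritizedReplay(capacity=1000)
for _ in range(64):
    s, s2 = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
    buffer.add((s, rng.integers(N_ACTIONS), rng.normal(), s2, 0.0))

idx, batch, weights = buffer.sample(32)
y, td = ddqn_targets(batch)
buffer.update_priorities(idx, td)  # larger TD error -> sampled more often
print("weighted TD loss:", float(np.mean(weights * td ** 2)))

The decoupling in ddqn_targets is what distinguishes DDQN from plain DQN (which would take max over the target net's own Q-values), and feeding the TD errors back into update_priorities is what makes the replay prioritized; together these are the ingredients the paper credits for faster convergence.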