Open Access
Path planning using deep reinforcement learning based on potential field in complex environment
Author(s) -
Qingxuan Jia,
Maonan Yang,
Miao Yu,
Xulong Li
Publication year - 2021
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1748/2/022016
Subject(s) - reinforcement learning , potential field , motion planning , path (computing) , artificial intelligence , robot , mathematical optimization , computer science , engineering
This paper introduces a deep reinforcement learning path planning method based on potential fields for complex environments. Building on the potential field model of the artificial potential field method, we define the states, actions, and rewards of the reinforcement learning problem, and use the Deep Deterministic Policy Gradient (DDPG) algorithm for optimization. By training robots in the environment, our method can effectively plan paths in complex environments with massive obstacles and avoid becoming trapped in local minimum regions of the potential field.
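The abstract's core idea, defining the reinforcement learning reward from an artificial potential field, can be sketched as below. This is an illustrative assumption of how such a setup might look, not the paper's implementation: the gains `K_ATT`, `K_REP`, the influence radius `D0`, and the potential-difference reward shaping are all hypothetical choices.

```python
import numpy as np

# Hypothetical APF parameters (not from the paper): attractive gain,
# repulsive gain, and obstacle influence radius.
K_ATT, K_REP, D0 = 1.0, 0.5, 2.0

def attractive_potential(q, goal):
    """Quadratic attractive potential pulling the robot toward the goal."""
    return 0.5 * K_ATT * np.sum((q - goal) ** 2)

def repulsive_potential(q, obstacles):
    """Sum of repulsive potentials, active only within radius D0 of an obstacle."""
    u = 0.0
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0.0 < d < D0:
            u += 0.5 * K_REP * (1.0 / d - 1.0 / D0) ** 2
    return u

def total_potential(q, goal, obstacles):
    """Total field value at configuration q."""
    return attractive_potential(q, goal) + repulsive_potential(q, obstacles)

def reward(q, q_next, goal, obstacles):
    """Shaped reward: positive when a step lowers the total potential.

    A DDPG agent trained on this signal can learn to descend the field
    while exploration lets it escape local minima, which pure gradient
    descent on the field cannot do.
    """
    return total_potential(q, goal, obstacles) - total_potential(q_next, goal, obstacles)
```

In such a setup the state fed to the DDPG actor might concatenate the robot position with local field information, and the continuous action (e.g. heading and speed) is exactly what DDPG's deterministic policy is suited to produce; the paper's actual state, action, and reward definitions are in the full text.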
