A RDA-Based Deep Reinforcement Learning Approach for Autonomous Motion Planning of UAV in Dynamic Unknown Environments
Author(s) -
Kaifang Wan,
Xiaoguang Gao,
Zijian Hu,
Wei Zhang
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1487/1/012006
Subject(s) - reinforcement learning, adaptability, planner, motion planning, scheme (mathematics), computer science, motion (physics), artificial intelligence, control (management), real time computing, control engineering, engineering, robot, mathematics, ecology, mathematical analysis, biology
Autonomous motion planning (AMP) in dynamic unknown environments has emerged as an urgent requirement with the proliferation of unmanned aerial vehicles (UAVs). In this paper, we present a DRL-based planning framework to address the AMP problem, which is applicable in both military and civilian fields. To maintain learning efficiency, a novel reward difference amplifying (RDA) scheme is proposed to reshape conventional reward functions; it is incorporated into state-of-the-art DRL algorithms to construct novel learners for the planner. Unlike conventional motion planning approaches, our DRL-based methods provide end-to-end control for the UAV, directly mapping raw sensory measurements into high-level control signals. Training and testing experiments demonstrate that the RDA scheme contributes substantially to performance improvement and gives the UAV good adaptability to dynamic environments.
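The abstract does not give the exact form of the RDA reward reshaping. Purely as an illustration, the Python sketch below shows one plausible reading of a reward-difference amplification: the reshaped reward adds an amplified difference between consecutive step rewards so that small improvements or degradations yield a stronger learning signal. The function name rda_reshape, the amplification coefficient, and the formula are assumptions for illustration, not taken from the paper.

    import numpy as np

    def rda_reshape(rewards, amplification=5.0):
        """Hypothetical reward-difference amplification (illustrative only).

        Adds an amplified difference between consecutive raw rewards,
        r'_t = r_t + k * (r_t - r_{t-1}), with r'_0 = r_0.
        The coefficient `amplification` (k) is an assumed hyperparameter.
        """
        rewards = np.asarray(rewards, dtype=float)
        # Differences between consecutive rewards; 0 for the first step.
        diffs = np.diff(rewards, prepend=rewards[0])
        return rewards + amplification * diffs

    if __name__ == "__main__":
        # Raw step rewards from a hypothetical approach-to-goal episode.
        raw = [-1.00, -0.98, -0.97, -0.80, -0.79]
        print(rda_reshape(raw))

Under these assumptions, the small per-step improvements in the raw rewards are magnified in the reshaped signal, which is the kind of effect a difference-amplifying scheme would feed to the DRL learner.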
