Open Access
Pose-guided End-to-end Visual Navigation
Author(s) - Cuiyun Fang, Chaofan Zhang, Fulin Tang, Wang Fan, Yihong Wu, Yong Liu
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1873/1/012011
Subject(s) - computer science, artificial intelligence, flexibility (engineering), computer vision, end to end principle, reinforcement learning, action (physics), motion (physics), rotation (mathematics), robot, path (computing), grid, mathematics, statistics, physics, quantum mechanics, programming language, geometry
End-to-end visual navigation based on deep reinforcement learning (DRL) has recently attracted much attention. In most existing navigation methods, a robot moves only along fixed directions (e.g., up, down, left, and right) on a grid. Such methods are neither flexible nor efficient, which degrades navigation performance (i.e., movement distance and number of rotations). To address this problem, we propose a novel pose-guided end-to-end visual navigation framework that is both flexible and efficient. In the pose-guided framework, a robot can move along arbitrary directions, which are determined by the poses between adjacent objects. Furthermore, to select proper motions and ultimately form an optimal path, we propose a DRL-based action-selection strategy, in which a dynamic action selection space built on a deep siamese actor-critic network is developed. In addition, to validate the proposed method, we construct a novel pose-guided dataset. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods in both flexibility and efficiency.
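
The abstract does not detail the network architecture, so the following PyTorch sketch is only an illustration of the general idea: a shared-weight (siamese) encoder embeds the current and goal observations, an actor head scores a variable-sized set of candidate relative poses (a dynamic action selection space), and a critic head estimates the state value. All dimensions, layer choices, names, and the pose parameterization (here a 6-DoF vector) are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn


class SiameseActorCritic(nn.Module):
    """Hypothetical sketch: shared (siamese) encoder for current and goal
    observations; fused features drive an actor that scores each candidate
    pose-guided motion and a critic that estimates state value."""

    def __init__(self, obs_dim=2048, pose_dim=6, hidden_dim=512):
        super().__init__()
        # One encoder applied to both inputs -- this weight sharing is
        # what makes the network "siamese".
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.fuse = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
        )
        # The actor scores one candidate action (a relative pose) at a
        # time, so the number of candidates may change at every step.
        self.actor = nn.Linear(hidden_dim + pose_dim, 1)
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, obs, goal, candidate_poses):
        # obs, goal: (obs_dim,) feature vectors of the current/target views.
        # candidate_poses: (N, pose_dim) relative poses to adjacent objects;
        # N varies per step, forming the dynamic action selection space.
        state = self.fuse(torch.cat([self.encoder(obs),
                                     self.encoder(goal)], dim=-1))
        expanded = state.expand(candidate_poses.size(0), -1)
        logits = self.actor(torch.cat([expanded, candidate_poses],
                                      dim=-1)).squeeze(-1)
        value = self.critic(state)
        return torch.distributions.Categorical(logits=logits), value


# Toy usage: choose among 5 candidate motions toward adjacent objects.
net = SiameseActorCritic()
policy, value = net(torch.randn(2048), torch.randn(2048), torch.randn(5, 6))
action = policy.sample()  # index of the chosen relative pose

Scoring each candidate pose independently lets the action space grow or shrink with the number of adjacent objects, which is one plausible way to realize the dynamic action selection space the abstract describes.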
