Open Access
A CNN Based Vision-Proprioception Fusion Method for Robust UGV Terrain Classification
Author(s) - Yu Chen, Chirag Rastogi, William R. Norris
Publication year - 2021
Publication title - IEEE Robotics and Automation Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.123
H-Index - 56
ISSN - 2377-3766
DOI - 10.1109/LRA.2021.3101866
Subject(s) - Robotics and control systems; computing and processing; components, circuits, devices and systems
The ability of ground vehicles to identify terrain types and characteristics can enable more accurate localization and information-rich mapping. Previous studies have shown the possibility of classifying terrain types based on proprioceptive sensors that monitor wheel-terrain interactions. However, most methods work well only under strict motion restrictions, such as driving in a straight path at constant speed, which makes them difficult to deploy on real-world field robotics missions. To lift this restriction, this letter proposes a fast, compact, and motion-robust proprioception-based terrain classification method. The method uses common on-board UGV sensors and a 1D Convolutional Neural Network (CNN) model. Its accuracy was further improved by fusing it with a vision-based CNN that classifies terrain by its appearance. Experimental results indicate that the final fusion models are highly robust, achieving over 93% accuracy under various lighting conditions and motion maneuvers.
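
The abstract does not specify layer configurations, sensor channels, or the fusion scheme in detail, so the following is only a minimal PyTorch sketch of the kind of architecture described: a 1D CNN over windows of proprioceptive signals, a small 2D CNN over terrain images, and late fusion of the two feature vectors before classification. All channel counts, window lengths, class counts, and the choice of concatenation-based fusion are illustrative assumptions, not the authors' published design.

```python
import torch
import torch.nn as nn

class ProprioCNN(nn.Module):
    """1D CNN over a window of proprioceptive signals.
    The 6 input channels (e.g., IMU axes) are an assumption."""
    def __init__(self, in_channels=6, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):  # x: (batch, channels, time)
        return self.fc(self.net(x).squeeze(-1))

class VisionCNN(nn.Module):
    """Small 2D CNN over terrain images; a stand-in for whatever
    vision backbone the paper actually uses."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):  # x: (batch, 3, H, W)
        return self.fc(self.net(x).flatten(1))

class FusionClassifier(nn.Module):
    """Late fusion: concatenate both feature vectors, then classify.
    The number of terrain classes is a placeholder."""
    def __init__(self, n_classes=5, feat_dim=64):
        super().__init__()
        self.proprio = ProprioCNN(feat_dim=feat_dim)
        self.vision = VisionCNN(feat_dim=feat_dim)
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, signal, image):
        z = torch.cat([self.proprio(signal), self.vision(image)], dim=1)
        return self.head(z)  # terrain class logits

# Usage with dummy tensors: a 6-channel, 200-sample signal window
# and a 64x64 RGB image per example.
model = FusionClassifier()
logits = model(torch.randn(8, 6, 200), torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 5])
```

One property of this late-fusion layout, consistent with the abstract's motivation, is that either branch can be trained or evaluated on its own, so the proprioceptive path still produces a classification when lighting degrades the vision input.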
