From Virtual to Reality: Fast Adaptation of Virtual Object Detectors to Real Domains
Author(s) - Baochen Sun, Kate Saenko
Publication year - 2014
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.28.82
Subject(s) - computer science , computer vision , object detection , domain adaptation , virtual reality , artificial intelligence , machine learning , bounding box , pattern recognition , 3D models , image rendering
The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains.
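The abstract describes an adaptation step based on decorrelated features. As a hedged illustration only (the paper's exact procedure is not reproduced here), a minimal sketch of decorrelating, i.e. whitening, a feature matrix so that its empirical covariance becomes the identity might look like this; the function name `decorrelate` and the epsilon regularizer are assumptions for the example:

```python
import numpy as np

def decorrelate(features, eps=1e-5):
    """Whiten a feature matrix: subtract the column means, then
    apply the inverse square root of the (regularized) covariance
    so that the transformed features are approximately uncorrelated."""
    x = features - features.mean(axis=0)
    cov = np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])
    # Symmetric inverse square root via eigendecomposition
    vals, vecs = np.linalg.eigh(cov)
    whitener = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return x @ whitener

# Example: build correlated synthetic features, then whiten them
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))
white = decorrelate(feats)
```

After whitening, the covariance of `white` is close to the identity matrix, which removes feature correlations that may otherwise be specific to the (virtual) training domain.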