Open Access
Quality Assessment for Crowdsourced Object Annotations
Author(s) - Sirion Vittayakorn, James Hays
Publication year - 2011
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.25.109
Subject(s) - annotation , computer science , crowdsourcing , automatic image annotation , object (grammar) , grading (engineering) , quality (philosophy) , information retrieval , artificial intelligence , task (project management) , machine learning , image retrieval , image (mathematics) , world wide web , philosophy , epistemology , civil engineering , management , engineering , economics
As computer vision datasets grow larger the community is increasingly relying on crowdsourced annotations to train and test our algorithms. Due to the heterogeneous and unpredictable capability of online annotators, various strategies have been proposed to “clean” crowdsourced annotations. However, these strategies typically involve getting more annotations, perhaps different types of annotations (e.g. a grading task), rather than computationally assessing the annotation or image content. In this paper we propose and evaluate several strategies for automatically estimating the quality of a spatial object annotation. We show that one can significantly outperform simple baselines, such as that used by LabelMe, by combining multiple image-based annotation assessment strategies.
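The abstract describes combining several image-based cues to score a spatial annotation against a simple LabelMe-style baseline. As a rough illustration of that idea (not the paper's actual features or weights), the sketch below scores a polygon annotation by how well its boundary aligns with detected image edges and by a vertex-count baseline; the cue choices, thresholds, and weights are assumptions for illustration only.

```python
# Illustrative sketch of combining image-based cues into an annotation-quality score.
# The cues, thresholds, and weights below are assumptions, not the paper's method.
import numpy as np
import cv2


def edge_agreement(image_gray, polygon, band=3):
    """Fraction of annotation-boundary pixels lying within `band` pixels of a Canny edge."""
    edges = cv2.Canny(image_gray, 100, 200)
    # Dilate edges so boundary pixels near (not exactly on) an edge still count.
    edges = cv2.dilate(edges, np.ones((2 * band + 1, 2 * band + 1), np.uint8))
    boundary = np.zeros_like(image_gray)
    cv2.polylines(boundary, [polygon.astype(np.int32)], isClosed=True, color=255, thickness=1)
    on_boundary = boundary > 0
    return float((edges[on_boundary] > 0).mean()) if on_boundary.any() else 0.0


def control_point_baseline(polygon, scale=20.0):
    """LabelMe-style baseline: more polygon vertices suggests a more careful annotation."""
    return 1.0 - float(np.exp(-len(polygon) / scale))


def annotation_quality(image_gray, polygon, weights=(0.7, 0.3)):
    """Weighted combination of the two cues; weights are placeholders, not learned values."""
    cues = np.array([edge_agreement(image_gray, polygon),
                     control_point_baseline(polygon)])
    return float(np.dot(np.asarray(weights), cues))


if __name__ == "__main__":
    # "example.jpg" and the polygon are placeholders for a real image and annotation.
    img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
    if img is not None:
        poly = np.array([[40, 40], [200, 50], [210, 180], [50, 170]], dtype=np.float32)
        print("estimated quality:", annotation_quality(img, poly))
```

In the paper itself the individual cues are combined and evaluated against human grading; the fixed weights above simply stand in for whatever combination rule one would learn or tune.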
