Open Access
Local Crowdsourcing for Annotating Audio: the Elevator Annotator platform
Author(s) - Themistoklis Karavellas, Anggarda Prameswari, Oana Inel, Victor de Boer
Publication year - 2019
Publication title - Human Computation
Language(s) - English
Resource type - Journals
ISSN - 2330-8001
DOI - 10.15346/hc.v6i1.100
Subject(s) - crowdsourcing , annotation , computer science , elevator , set (abstract data type) , modalities , human–computer interaction , world wide web , artificial intelligence , engineering , structural engineering , social science , sociology , programming language
Crowdsourcing and other human computation techniques have proven useful for collecting large numbers of annotations for various datasets. In most cases, crowdsourcing campaigns are run on online platforms. Local crowdsourcing is a variant in which annotation is done at specific physical locations. This paper describes a local crowdsourcing concept, platform and experiment. The case setting concerns eliciting annotations for an audio archive. For the experiment, we developed a hardware platform designed to be deployed in building elevators. To evaluate the effectiveness of the platform and to test the influence of location on the annotation results, an experiment was set up in two different locations, with two different user interaction modalities used at each location. The results show that our simple local crowdsourcing setup achieves acceptable accuracy levels with up to 4 annotations per hour, and that location has a significant effect on accuracy.
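To make the kiosk-style workflow described in the abstract concrete, the sketch below shows one possible annotation loop: play an archive clip to an elevator rider, capture a yes/no response through some interaction modality, and log the answer together with the deployment location. The paper does not publish its implementation, so everything here is hypothetical: the Python language choice, the function names play_clip, read_response and log_annotation, and the CSV log format are all assumptions made for illustration only.

```python
"""Minimal sketch of an elevator annotation kiosk loop (hypothetical)."""
import csv
import random
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

CLIPS = ["clip_001.wav", "clip_002.wav", "clip_003.wav"]  # excerpts from the audio archive
LOG_FILE = Path("annotations.csv")
LOCATION = "building_A_elevator"   # which deployment site produced the annotation
MODALITY = "buttons"               # or "voice": the two interaction modalities compared


def play_clip(clip: str) -> None:
    """Placeholder for audio playback on the kiosk's speaker."""
    print(f"Playing {clip} ...")
    time.sleep(1)  # stand-in for the clip's duration


def read_response(timeout_s: float = 10.0) -> Optional[str]:
    """Placeholder for the rider's yes/no answer; None means no response in time."""
    # A real kiosk would poll physical buttons or a speech recognizer here.
    return random.choice(["yes", "no", None])


def log_annotation(clip: str, response: str) -> None:
    """Append one annotation record to a local CSV log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "location", "modality", "clip", "response"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         LOCATION, MODALITY, clip, response])


def main() -> None:
    # One annotation attempt per elevator ride: play a clip, ask a question,
    # and store the answer only if the rider responds before leaving.
    clip = random.choice(CLIPS)
    play_clip(clip)
    response = read_response()
    if response is not None:
        log_annotation(clip, response)


if __name__ == "__main__":
    main()
```

Keeping each ride to a single clip-and-question exchange matches the short dwell time of an elevator trip, which is consistent with the reported throughput of up to 4 annotations per hour.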
