Open Access
Treating Crowdsourcing as Examination: How to Score Tasks and Online Workers?
Author(s) -
Guangyang Han,
Sufang Li,
Runmin Wang,
Chunming Wu
Publication year - 2022
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5121/csit.2022.120701
Subject(s) - crowdsourcing , computer science , task (project management) , outsourcing , ground truth , machine learning , process (computing) , artificial intelligence , graph , task analysis , theoretical computer science , world wide web , political science , law , operating system , management , economics
Crowdsourcing is an online outsourcing mode that can meet machine learning algorithms' urgent need for massive amounts of labeled data. How to model the interaction between workers and tasks is a hot research topic. We model workers as four types according to their ability and divide tasks into hard, medium, and easy according to their difficulty. We believe that even experts struggle with difficult tasks, while sloppy workers can get easy tasks right. Good examination tasks should therefore have a moderate degree of difficulty and discriminability so that workers can be scored more objectively. Thus, we score workers' ability mainly on the medium-difficulty tasks. A probabilistic graphical model is adopted to simulate the task execution process, and an iterative method is used to calculate and update the ground truth, the workers' abilities, and the tasks' difficulties. We verify the effectiveness of our algorithm in both simulated and real crowdsourcing scenarios.
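
The alternating updates described in the abstract can be pictured with a minimal sketch. The code below is not the paper's model: it assumes binary tasks and simple ability-weighted voting, and the weighting of harder tasks and the initial values are illustrative assumptions, included only to show the shape of iterating over ground truth, worker ability, and task difficulty.

import numpy as np

def iterative_estimation(labels, n_iters=50):
    """Illustrative sketch: alternate updates of ground truth, worker ability,
    and task difficulty from a (n_workers, n_tasks) matrix of binary answers,
    with -1 marking tasks a worker did not answer. Not the paper's model."""
    n_workers, n_tasks = labels.shape
    answered = labels >= 0

    # Initialise ground truth by majority vote; abilities and difficulties uniformly (assumed priors).
    truth = np.zeros(n_tasks)
    for j in range(n_tasks):
        votes = labels[answered[:, j], j]
        truth[j] = 1.0 if votes.size and votes.mean() >= 0.5 else 0.0
    ability = np.full(n_workers, 0.7)
    difficulty = np.full(n_tasks, 0.5)   # 0 = easy, 1 = hard

    for _ in range(n_iters):
        # Worker ability: agreement with current truth, with harder tasks weighted more (assumed weighting).
        for i in range(n_workers):
            j_idx = np.where(answered[i])[0]
            if j_idx.size == 0:
                continue
            correct = (labels[i, j_idx] == truth[j_idx]).astype(float)
            w = 0.5 + difficulty[j_idx]
            ability[i] = np.clip((correct * w).sum() / w.sum(), 0.05, 0.95)

        # Task difficulty: ability-weighted fraction of wrong answers on the task.
        for j in range(n_tasks):
            i_idx = np.where(answered[:, j])[0]
            if i_idx.size == 0:
                continue
            wrong = (labels[i_idx, j] != truth[j]).astype(float)
            difficulty[j] = np.clip((wrong * ability[i_idx]).sum() / ability[i_idx].sum(), 0.05, 0.95)

        # Ground truth: ability-weighted vote per task.
        for j in range(n_tasks):
            i_idx = np.where(answered[:, j])[0]
            if i_idx.size == 0:
                continue
            score = np.where(labels[i_idx, j] == 1, ability[i_idx], -ability[i_idx]).sum()
            truth[j] = 1.0 if score >= 0 else 0.0

    return truth, ability, difficulty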
