Utilizing content moderators to investigate critical factors for assessing the quality of answers on Brainly, a social learning Q&A platform for students: A pilot study
Author(s) -
Erik Choi,
Michal Borkowski,
Julien Zakoian,
Katie Sagan,
Kent Scholla,
Crystal Ponti,
Michal Labedz,
Maciek Bielski
Publication year - 2015
Publication title -
Proceedings of the Association for Information Science and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.193
H-Index - 14
ISSN - 2373-9231
DOI - 10.1002/pra2.2015.145052010069
Subject(s) - helpfulness, relevance (law), quality (philosophy), context (archaeology), vagueness, psychology, content (measure theory), computer science, ambiguity, information retrieval, social psychology, artificial intelligence, mathematics, epistemology, paleontology, philosophy, political science, law, biology, fuzzy logic, mathematical analysis, programming language
In this paper, we present findings from a pilot study that utilized content moderators from Brainly, a social learning Q&A platform, to assess the quality of answers. It can be argued that Brainly users who actively moderate content have a better contextual understanding of how users interact with each other through question‐answering activities, and of which answers are more likely to be relevant and appropriate to a question in the context of Brainly. The findings indicate that helpfulness, informativeness, and relevance are the most critical factors affecting the quality of answers. Further content analysis also identified two new criteria: 1) descriptiveness – evaluating how well answers provide descriptive summaries through detailed and additional information, and 2) explicitness – clearly constructing answers to reduce vagueness about what information answerers intend to provide to satisfy an asker's need.