Rubrics for designing and evaluating online asynchronous discussions
Author(s) - Penny Lana, Murphy Elizabeth
Publication year - 2009
Publication title - British Journal of Educational Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.79
H-Index - 95
eISSN - 1467-8535
pISSN - 0007-1013
DOI - 10.1111/j.1467-8535.2008.00895.x
Subject(s) - rubric, asynchronous communication, nonprobability sampling, computer science, the internet, psychology, asynchronous learning, selection (genetic algorithm), mathematics education, artificial intelligence, world wide web, teaching method, medicine, cooperative learning, environmental health, computer network, population, synchronous learning
The purpose of the study reported in this paper was to identify performance criteria and ratings in rubrics designed for the evaluation of learning in online asynchronous discussions (OADs) in post‐secondary contexts. We analysed rubrics collected from Internet sources. Using purposive sampling, we reached saturation with the selection of 50 rubrics. Using keyword analysis and subsequent grouping of keywords into categories, we identified 153 performance criteria in 19 categories and 831 ratings in 40 categories. We subsequently identified four core categories: cognitive (44.0%), mechanical (19.0%), procedural/managerial (18.29%) and interactive (17.17%). The remaining 1.52% of ratings and performance criteria were labelled vague and not assigned to any core category.