Is critical thinking happening? Testing content analysis schemes applied to MOOC discussion forums
Author(s) -
O'Riordan Tim,
Millard David E.,
Schulz John
Publication year - 2021
Publication title -
Computer Applications in Engineering Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.478
H-Index - 29
eISSN - 1099-0542
pISSN - 1061-3773
DOI - 10.1002/cae.22314
Subject(s) - online discussion, computer science, content analysis, coding (social sciences), massive open online course, collaborative learning, data science, world wide web, knowledge management
Learners’ progress within computer‐supported collaborative learning environments is typically measured through the analysis and interpretation of quantitative web interaction metrics. However, the usefulness of these “proxies for learning” is questionable, as they do not necessarily reflect critical thinking, an essential component of collaborative learning. Research indicates that pedagogical content analysis schemes have value in measuring critical discourse in small‐scale, formal, online learning environments, but research applying these methods to high‐volume, informal Massive Open Online Course (MOOC) forums is less common. The challenge in this setting is to develop valid and reliable indicators that operate successfully at scale. In this study, we test two established coding schemes used for the pedagogical content analysis of online discussions in a large‐scale review of MOOC comment data. Pedagogical scores, derived from raters’ manual coding of comments, are correlated with automatically derived linguistic and interaction indicators. Results show that the content analysis methods are reliable and very strongly correlated with each other, suggesting that their specific format is not significant in this setting. In addition, the methods are strongly associated with the relevant linguistic indicators of higher levels of learning and correlate more weakly with other linguistic and interaction metrics. This suggests promise for further research using machine learning techniques, with the goal of providing realistic feedback to instructors, learners, and learning designers.
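To make the abstract's methodology concrete, the sketch below illustrates the general shape of such an analysis: checking inter-rater reliability of manually coded comments, then correlating the resulting pedagogical scores with automatically derived indicators. The example data, variable names, and choice of statistics (weighted Cohen's kappa, Spearman's rho) are illustrative assumptions, not the paper's actual pipeline or tooling.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: two raters score the same 10 MOOC comments on a
# 1-5 pedagogical scale under one of the content analysis schemes.
rater_a = np.array([1, 2, 2, 3, 4, 4, 5, 3, 2, 1])
rater_b = np.array([1, 2, 3, 3, 4, 5, 5, 3, 2, 1])

# Inter-rater reliability: a weighted kappa suits ordinal rating scales,
# since near-misses should count for more than distant disagreements.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")

# Pedagogical score per comment: here simply the mean of the two ratings.
ped_score = (rater_a + rater_b) / 2

# Hypothetical automatically derived indicators for the same comments,
# e.g. a linguistic marker of reflective language and a raw reply count.
linguistic_indicator = np.array(
    [0.1, 0.3, 0.4, 0.5, 0.7, 0.8, 0.9, 0.5, 0.2, 0.1])
interaction_count = np.array([3, 1, 4, 2, 5, 2, 6, 3, 1, 2])

# Spearman correlation handles ordinal scores without assuming linearity;
# a strong rho for the linguistic indicator and a weak one for the raw
# interaction count would mirror the pattern the abstract reports.
for name, indicator in [("linguistic indicator", linguistic_indicator),
                        ("interaction count", interaction_count)]:
    rho, p = spearmanr(ped_score, indicator)
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")
```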