
How Reliable and Valid Are the Evaluations of Digital Competence in Higher Education: A Systematic Mapping Study
Author(s) -
Rafael Saltos-Rivas,
Pavel Novoa-Hernández,
Rocío Serrano Rodríguez
Publication year - 2022
Publication title -
SAGE Open
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.357
H-Index - 32
ISSN - 2158-2440
DOI - 10.1177/21582440211068492
Subject(s) - competence (human resources), psychology, validity, applied psychology, systematic review, medical education, data science, MEDLINE, social psychology, computer science, medicine, psychometrics, clinical psychology, political science, law
Evaluating digital competencies has become a topic of growing interest in recent years. Although several reviews and studies have summarized the main advances and shortcomings in this area, some issues remain unexplored. In particular, very little information is available about how the validity and reliability of the instruments used are ensured. This study addresses this issue through a systematic mapping study covering the period from January 2015 to July 2020. Based on 88 primary studies, we conclude that a growing number of studies have emerged over the years; most are based on European university students in social science programs; the quality of the journals in which they were published is low; and the instruments used are mostly questionnaires and ad hoc surveys that measure the various dimensions of digital competence based on participants’ perceptions. Of the eight possible types of quality assessment, studies commonly report only four (one for reliability and three for validity). More than 50% of the studies provide no evidence of having assessed both reliability and validity at the same time. Although participant discipline was significantly associated with the practice of reporting reliability and validity assessments, no specific dimension explained this association. A non-parametric multivariate analysis reveals, among other interesting patterns, that the practice of not reporting quality assessments is more prevalent in studies whose participants belong to the fields of Engineering and Technology, Health Sciences, and Humanities.