Value in workplace‐based assessment rater training: psychometrics or edumetrics?
Author(s) - Jelovsek J Eric
Publication year - 2015
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/medu.12763
Subject(s) - citation, library science, value (mathematics), psychology, medical education, medicine, sociology, computer science, statistics, mathematics
Workplace-based assessment (WBA) remains an important method of evaluating competencies in medical training programmes. It also continues to be the most acceptable way to reassure the public that medical trainees achieve a minimum level of competence in the delivery of patient care. Unfortunately, one of the most consistent findings in measurements of clinical competence is the tremendous variability that occurs in the rating of trainee performance across a set of tasks. This is not unique to the health professions: inter-rater variability has also been found to be a dominant source of measurement error in other areas of science, as well as in law and military activities.

In this issue of Medical Education, Kogan et al. attribute the reliability problems found in many WBA tools to assessors who tend to 'value different aspects of performance', 'lack a clear standard for judging performance' or 'rely on a gut or gestalt feeling', and they conclude that assessors do not 'correctly' apply assessment criteria. They set out to explore how two rater training interventions might help explain the challenges of WBA, in the hope of improving reliability. There is an ironic twist to their results: having allowed faculty staff to create important assessment elements and to participate in the consensus-building process required to improve rater reliability, Kogan et al. concluded that the biggest benefit may be how much faculty staff learned about the assessment process and how they incorporated this learning into useful tools.