Competency‐based education calls for programmatic assessment: But what does this look like in practice?
Author(s) -
Rich Jessica V.,
Fostaty Young Sue,
Donnelly Catherine,
Hall Andrew K.,
Dagnone J. Damon,
Weersink Kristen,
Caudle Jaelyn,
Van Melle Elaine,
Klinger Don A.
Publication year - 2020
Publication title -
Journal of Evaluation in Clinical Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.737
H-Index - 73
eISSN - 1365-2753
pISSN - 1356-1294
DOI - 10.1111/jep.13328
Subject(s) - summative assessment , formative assessment , operationalization , competence (human resources) , medical education , documentation , qualitative property , psychology , medicine , pedagogy , computer science , social psychology , philosophy , epistemology , machine learning , programming language
Rationale, aims, and objectives: Programmatic assessment has been identified as a system‐oriented approach to achieving the multiple purposes for assessment within Competency‐Based Medical Education (CBME), i.e., formative, summative, and program improvement. While there are well‐established principles for designing and evaluating programs of assessment, few studies illustrate and critically interpret what a system of programmatic assessment looks like in practice. This study aims to use systems thinking and the 'two communities' metaphor to interpret a model of programmatic assessment and to identify challenges and opportunities associated with its operationalization.

Method: An interpretive case study was used to investigate how programmatic assessment is being operationalized within one competency‐based residency program at a Canadian university. Qualitative data were collected from residents, faculty, and program leadership through semi‐structured group and individual interviews conducted nine months after CBME implementation. Data were analyzed using a combination of data‐based inductive analysis and theory‐derived deductive analysis.

Results: In this model, Academic Advisors played a central role in brokering assessment data between the communities responsible for producing and using residents' performance information for decision making (i.e., formative, summative/evaluative, and program improvement). As system intermediaries, Academic Advisors were in a privileged position to see how the parts of the assessment system contributed to the functioning of the whole and could identify which system components were not functioning as intended. Challenges were identified with the documentation of residents' performance information (i.e., system inputs), the use of low‐stakes formative assessments to inform high‐stakes evaluative judgments about the achievement of competence standards, and gaps in feedback mechanisms for closing learning loops.

Conclusions: The findings of this research suggest that program stakeholders can benefit from a systems perspective on how their assessment practices contribute to the efficacy of the assessment system as a whole. Academic Advisors are well positioned to support educational development efforts focused on overcoming challenges with operationalizing programmatic assessment.
