Progress testing: is there a role for the OSCE?
Author(s) - Pugh Debra, Touchie Claire, Wood Timothy J, Humphrey-Murto Susan
Publication year - 2014
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/medu.12423
Subject(s) - psychology , medline , educational measurement , medical education , medicine , political science , curriculum , pedagogy , law
Abstract -
Context: The shift from a time-based to a competency-based framework in medical education has created a need for frequent formative assessments. Many educational programmes use some form of written progress test to identify areas of strength and weakness and to promote continuous improvement in their learners. However, the role of performance-based assessments, such as objective structured clinical examinations (OSCEs), in progress testing remains unclear.
Objective: The aims of this paper are to describe the use of an OSCE to assess learners at different stages of training, describe a structure for reporting scores, and provide evidence for the psychometric properties of different rating tools.
Methods: A 10-station OSCE was administered to internal medicine residents in postgraduate years (PGYs) 1–4. Candidates were assessed using a checklist (CL), a global rating scale (GRS) and a training level rating scale (TLRS). Reliability was calculated for each measure using Cronbach's alpha. Differences in performance by year of training were explored using analysis of variance (ANOVA). Correlations between scores obtained using the different rating instruments were calculated.
Results: Sixty-nine residents participated in the OSCE. Inter-station reliability was higher for the TLRS (0.88) than for the CL (0.84) or the GRS (0.79). Using all three rating instruments, scores varied significantly by year of training (p < 0.001). Scores from the different rating instruments were highly correlated: CL and GRS, r = 0.93; CL and TLRS, r = 0.90; and GRS and TLRS, r = 0.94 (p < 0.001). Candidates received feedback on their performance relative to examiner expectations for their PGY level.
Conclusions: Scores were found to have high reliability and demonstrated significant differences in performance by year of training. This provides evidence for the validity of using scores achieved on an OSCE as markers of progress in learners at different levels of training. Future studies will focus on assessing individual progress on the OSCE over time.
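Analysis note: the Methods and Results describe three standard psychometric analyses (inter-station reliability via Cronbach's alpha, one-way ANOVA across PGY levels, and Pearson correlations between rating instruments). The sketch below is purely illustrative and is not the authors' analysis code; it runs the same three analyses on simulated OSCE scores with NumPy/SciPy, and every variable name and data value in it is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 69 residents x 10 stations, checklist scores in [0, 1].
n_residents, n_stations = 69, 10
pgy = rng.integers(1, 5, size=n_residents)                 # training year, PGY 1-4
base = 0.50 + 0.08 * pgy                                   # scores rise with training level
cl = np.clip(base[:, None] + rng.normal(0, 0.10, (n_residents, n_stations)), 0, 1)

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of station variances / variance of total scores)."""
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))

# Inter-station reliability of the checklist scores.
alpha = cronbach_alpha(cl)

# One-way ANOVA: do mean OSCE scores differ by year of training?
totals = cl.mean(axis=1)
f_stat, p_anova = stats.f_oneway(*[totals[pgy == y] for y in (1, 2, 3, 4)])

# Pearson correlation between two rating instruments (here a simulated GRS total).
grs_totals = totals + rng.normal(0, 0.03, n_residents)
r, p_r = stats.pearsonr(totals, grs_totals)

print(f"alpha = {alpha:.2f}, ANOVA F = {f_stat:.1f} (p = {p_anova:.3g}), r = {r:.2f}")
```

Run as-is, this prints the reliability, ANOVA and correlation statistics for the simulated data; the study's reported values (alpha up to 0.88, r up to 0.94) come from the real OSCE scores, not from this sketch.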
