Opening the black box of clinical skills assessment via observation: a conceptual model
Author(s) - Kogan Jennifer R, Conforti Lisa, Bernabeo Elizabeth, Iobst William, Holmboe Eric
Publication year - 2011
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/j.1365-2923.2011.04025.x
Subject(s) - medical education , psychology , grounded theory , inference , conceptual model , conceptual framework , applied psychology , qualitative research , medicine
Medical Education 2011; 45: 1048–1060

Objectives - This study was intended to develop a conceptual framework of the factors impacting on faculty members' judgements and ratings of resident doctors (residents) after direct observation with patients.

Methods - In 2009, 44 general internal medicine faculty members responsible for out-patient resident teaching in 16 internal medicine residency programmes in a large urban area in the eastern USA watched four videotaped scenarios and two live scenarios of standardised residents engaged in clinical encounters with standardised patients. After each, faculty members rated the resident using a mini-clinical evaluation exercise and were individually interviewed using a semi-structured interview. Interviews were videotaped, transcribed and analysed using grounded theory methods.

Results - Four primary themes that provide insights into the variability of faculty assessments of residents' performance were identified: (i) the frames of reference used by faculty members when translating observations into judgements and ratings are variable; (ii) high levels of inference are used during the direct observation process; (iii) the methods by which judgements are synthesised into numerical ratings are variable; and (iv) factors external to resident performance influence ratings. From these themes, a conceptual model was developed to describe the process of observation, interpretation, synthesis and rating.

Conclusions - It is likely that multiple factors account for the variability in faculty ratings of residents. Understanding these factors informs potential new approaches to faculty development to improve the accuracy, reliability and utility of clinical skills assessment.