Feedback Credibility in a Formative Postgraduate Objective Structured Clinical Examination: Effects of Examiner Type
Author(s) -
Lynfa Stroud,
Matthew Sibbald,
Denyse Richardson,
Heather McDonald-Blumer,
Rodrigo B. Cavalcanti
Publication year - 2018
Publication title -
Journal of Graduate Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.541
H-Index - 24
eISSN - 1949-8349
pISSN - 1949-8357
DOI - 10.4300/jgme-d-17-00578.1
Subject(s) - credibility , formative assessment , odds , specialty , medicine , objective structured clinical examination , odds ratio , medical education , family medicine , psychology , pedagogy , logistic regression , political science , law
Background - Resident perspectives on feedback are key determinants of its acceptance and effectiveness, and provider credibility is a critical element in perspective formation. It is unclear what factors influence a resident's judgment of feedback credibility.
Objective - We examined how residents perceive the credibility of feedback providers during a formative objective structured clinical examination (OSCE) in 2 ways: (1) ratings of faculty examiners compared with standardized patient (SP) examiners, and (2) ratings of faculty examiners based on alignment of expertise and station content.
Methods - During a formative OSCE, internal medicine residents were randomized to receive immediate feedback from either faculty examiners or SP examiners on communication stations, and from at least 1 specialty-congruent examiner and either 1 specialty-incongruent or 1 general internist faculty examiner for clinical stations. Residents rated the perceived credibility of feedback providers on a 7-point scale. Results were analyzed with proportional odds models for ordinal credibility ratings.
Results - A total of 192 of 203 residents (95%), 72 faculty, and 10 SPs participated. For communication stations, the odds of high credibility ratings were significantly lower for SP than for faculty examiners (odds ratio [OR] = 0.28, P < .001). For clinical stations, credibility odds were lower for specialty-incongruent faculty (OR = 0.19, P < .001) and female faculty (OR = 0.45, P < .001).
Conclusions - Faculty examiners were perceived as more credible than SP examiners, despite standardized feedback delivery. Specialty incongruency with station content and female sex were associated with lower credibility ratings for faculty examiners.
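To make the reported odds ratios concrete, the sketch below converts them into probabilities of a high credibility rating. This is not the authors' analysis code (they fit proportional odds models); it is a minimal illustration of odds-ratio arithmetic, and the 70% baseline probability is a hypothetical value chosen for illustration, not a figure from the study.

```python
def apply_odds_ratio(baseline_prob: float, odds_ratio: float) -> float:
    """Multiply baseline odds by an odds ratio and convert back to a probability.

    odds = p / (1 - p);  new_odds = odds * OR;  new_p = new_odds / (1 + new_odds)
    """
    baseline_odds = baseline_prob / (1 - baseline_prob)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)


# Reported ORs from the abstract, applied to a hypothetical 70% baseline
# probability of receiving a high credibility rating:
for label, odds_ratio in [
    ("SP vs faculty examiner (communication stations)", 0.28),
    ("Specialty-incongruent faculty (clinical stations)", 0.19),
    ("Female faculty (clinical stations)", 0.45),
]:
    p = apply_odds_ratio(0.70, odds_ratio)
    print(f"{label}: OR = {odds_ratio} -> P(high rating) = {p:.2f}")
```

For example, an OR of 0.28 against a 70% baseline (odds of 2.33) yields new odds of about 0.65, i.e. roughly a 40% probability, which shows why an OR well below 1 represents a substantial drop in perceived credibility.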