Open Access
Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination
Author(s) -
Michael J. Peeters,
M. Kenneth Cor,
Sarah E. Petite,
Michelle N. Schroeder
Publication year - 2021
Publication title -
INNOVATIONS in Pharmacy
Language(s) - English
Resource type - Journals
ISSN - 2155-0417
DOI - 10.24926/iip.v12i1.2110
Subject(s) - generalizability theory , objective structured clinical examination , pharmacy , reliability , psychology , medical education , statistics , medicine
Objectives: Performance-based assessments, including objective structured clinical examinations (OSCEs), are essential learning assessments within pharmacy education. Because important educational decisions can follow from performance-based assessment results, pharmacy colleges/schools should demonstrate acceptable rigor when validating their learning assessments. Though generalizability theory (G-Theory) has rarely been reported in pharmacy education, pharmacy educators would do well to use it to produce reliability evidence as part of their OSCE validation process. This investigation demonstrates the use of G-Theory to describe the reliability of an OSCE, as well as methods for enhancing that reliability.

Innovation: To evaluate practice-readiness in the semester before final-year rotations, third-year PharmD students took an OSCE. This OSCE included 14 stations over three weeks. Each week had four or five stations; one or two stations were scored by faculty raters, while three stations required students' written responses. All stations were scored 1-4. For the G-Theory analyses, we used G_Strings and then mGENOVA.

Critical Analysis: Ninety-seven students completed the OSCE; stations were scored independently. First, a univariate G-Theory design of students crossed with stations nested in weeks (p x s:w) was used. The total-score g-coefficient (reliability) for this OSCE was 0.72. Variance components for the test parameters were identified; notably, students accounted for only some of the variation in OSCE scores. Second, a multivariate G-Theory design of students crossed with stations (p· x s°) was used. This further analysis revealed which weeks were weakest for the reliability of scores from this learning assessment. Moreover, decision-studies showed how reliability would change with the number of stations each week; for a g-coefficient >0.80, seven stations per week were needed. Additionally, targets for improvement were identified.

Implications: In test validation, evidence of reliability is vital for the inference of generalization; G-Theory provided this evidence for our OSCE. Results indicated that score reliability was mediocre and could be improved with more stations. Revising problematic stations could also help reliability. Given this need for more stations, one practical insight was to administer those stations over multiple weeks/occasions, instead of all stations in one occasion.
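To illustrate the decision-study logic described above, the sketch below computes a relative g-coefficient for a persons-crossed-with-stations-nested-in-weeks (p x s:w) design and shows how it changes as stations per week increase. This is a minimal illustration using the standard G-Theory formula; the variance components here are hypothetical, not the values estimated in the study.

```python
def d_study_g(var_p, var_pw, var_psw, n_s, n_w):
    """Relative g-coefficient for a p x (s:w) design.

    var_p   -- person (student) variance component
    var_pw  -- person-by-week interaction variance
    var_psw -- person-by-station-within-week variance (confounded with error)
    n_s     -- stations per week; n_w -- number of weeks
    """
    # Relative error variance shrinks as stations/weeks are added.
    rel_error = var_pw / n_w + var_psw / (n_s * n_w)
    return var_p / (var_p + rel_error)


# Hypothetical variance components for illustration only.
var_p, var_pw, var_psw, n_w = 0.10, 0.02, 0.30, 3

for n_s in (5, 6, 7):
    g = d_study_g(var_p, var_pw, var_psw, n_s, n_w)
    print(f"{n_s} stations/week: g = {g:.2f}")
```

Increasing n_s reduces only the station-level error term, so reliability gains flatten out; this is why decision-studies are run across several station counts before committing to a longer OSCE.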
