Generalisability theory analyses of concept mapping assessment scores in a problem‐based medical curriculum
Author(s) -
Kassab Salah E,
Fida Mariam,
Radwan Ahmed,
Hassan Adla B,
AbuHijleh Marwan,
O'Connor Brian P
Publication year - 2016
Publication title -
Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/medu.13054
Subject(s) - variance (accounting) , context (archaeology) , concept map , dependability , reliability (semiconductor) , psychology , curriculum , construct (python library) , mathematics education , statistics , computer science , mathematics , pedagogy , geography , programming language , power (physics) , physics , accounting , archaeology , software engineering , quantum mechanics , business
Context - In problem‐based learning (PBL), students construct concept maps that integrate the different concepts related to a PBL case, guided by the learning needs generated in small‐group tutorials. Although an instrument for scoring students' concept maps in PBL programmes has been developed, its psychometric properties have not yet been assessed.

Objectives - This study evaluated the generalisability of, and sources of variance in, medical students' concept map assessment scores in a PBL context.

Methods - Medical students (Year 4, n = 116) each constructed three integrated concept maps, with the content domain of each map focused on a PBL clinical case. Four raters independently evaluated the concept maps against five criteria: valid selection of concepts; hierarchical arrangement of concepts; degree of integration; relationship to the context of the problem; and degree of student creativity. Generalisability theory was used to compute the reliability of the concept map scores.

Results - The dependability coefficient, which indicates the reliability of scores across the measured facets for making absolute decisions, was 0.814. Students' concept map scores (universe scores) accounted for the largest proportion of total variance (47%). Rater differences accounted for 10% of total variance, and the student × rater interaction for a further 25%. The variance attributable to differences in the content domain of the maps was negligible (2%); the remaining 16% reflected unexplained sources of error. The D study suggested that a dependability of 0.80 can be achieved with three raters each scoring two concept map domains, or with five raters each scoring only one concept map domain.

Conclusions - Concept mapping assessment scores of medical students in PBL showed high reliability. The results suggest that greater improvements in dependability come from increasing the number of raters rather than the number of concept map domains.
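As a worked check on the figures above, the absolute-decision dependability (Phi) coefficient for the study's actual design (four raters, three concept map domains) can be recomputed directly from the reported variance percentages. The sketch below is a minimal illustration, not the authors' analysis code: it assumes a fully crossed student × rater × domain design, and it assumes the student × domain and rater × domain components are folded into the residual term, since the abstract does not report them separately. The function name `phi_coefficient` is a hypothetical helper.

```python
# Hedged sketch: Phi (dependability) coefficient for a fully crossed
# student x rater x domain G-study, from reported variance components.
# Assumption: p x d and r x d components are folded into the residual.

def phi_coefficient(var, n_raters, n_domains):
    """Absolute-decision dependability for a design with
    n_raters raters and n_domains concept map domains."""
    error = (var["rater"] / n_raters
             + var["domain"] / n_domains
             + var["student_x_rater"] / n_raters
             + var["residual"] / (n_raters * n_domains))
    return var["student"] / (var["student"] + error)

# Variance percentages reported in the abstract (sum to 100).
components = {
    "student": 47,           # universe scores
    "rater": 10,             # rater main effect
    "student_x_rater": 25,   # student x rater interaction
    "domain": 2,             # content domain of the map
    "residual": 16,          # unexplained error
}

# Reproduce the study's design: four raters, three concept maps.
print(round(phi_coefficient(components, 4, 3), 3))  # 0.814
```

With these rounded components the computed coefficient matches the reported value of 0.814, which supports reading the design as four raters fully crossed with three map domains; alternative rater/domain counts can be explored by varying the last two arguments, as in the D study.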