The use of qualitative research criteria for portfolio assessment as an alternative to reliability evaluation: a case study
Author(s) - Driessen E, Van Der Vleuten C, Schuwirth L, Van Tartwijk J, Vermunt J
Publication year - 2005
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/j.1365-2929.2004.02059.x
Subject(s) - dependability , credibility , audit , portfolio , reliability , inter-rater reliability , qualitative research , psychology , applied psychology , medical education
Aim - Because it deals with qualitative information, portfolio assessment inevitably involves some degree of subjectivity. Stricter assessment criteria or more structured, prescribed content would improve inter-rater reliability, but would obliterate the essence of portfolio assessment: its flexibility, personal orientation and authenticity. We resolved this dilemma by using qualitative research criteria, rather than reliability, in the evaluation of portfolio assessment.

Methodology/research design - Five qualitative research strategies were used to achieve credibility and dependability of assessment: triangulation, prolonged engagement, member checking, audit trail and dependability audit. Mentors read portfolios at least twice during the year, providing feedback and guidance (prolonged engagement). Their recommendation for the end-of-year grade was discussed with the student (member checking) and submitted to a member of the portfolio committee. Information from different sources was combined (triangulation). Portfolios causing persistent disagreement were submitted to the full portfolio assessment committee. Quality assurance procedures with external auditors were used (dependability audit) and the assessment process was thoroughly documented (audit trail).

Results - A total of 233 portfolios were assessed. Students and mentors disagreed on 7 (3%) portfolios, and 9 portfolios were submitted to the full committee. The final decision on 29 (12%) portfolios differed from the mentor's recommendation.

Conclusion - We think we have devised an assessment procedure that safeguards the characteristics of portfolio assessment, with credibility and dependability built into the judgement procedure. Further support for credibility and dependability might be sought through a study involving different assessment committees.