Qualitative Coding: An Approach to Assess Inter-Rater Reliability
Author(s) -
Anne McAlister,
Dennis Lee,
Katherine Ehlert,
Rachel Kajfez,
Courtney Faber,
Marian Kennedy
Publication year - 2018
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--28777
Subject(s) - computer science, coding (social sciences), software, software quality, software development, statistics, programming language, mathematics
When using qualitative coding techniques, establishing inter-rater reliability (IRR) is a recognized method of ensuring the trustworthiness of a study in which multiple researchers are involved in coding. However, the process of manually determining IRR is not always fully explained within manuscripts or books. This is especially true when specialized qualitative coding software is used, since these software packages can often calculate IRR automatically while providing little explanation of the methods used. Methods of coding without commercial software vary greatly, including the use of non-specialized word-processing or spreadsheet software and marking transcripts by hand with colored highlighters, pens, and even sticky notes. This array of coding approaches has led to a variety of techniques for calculating IRR. It is important that these techniques be shared, since IRR calculation is automatic only when specialized coding software is used. This study summarizes a possible approach to establishing IRR for studies in which researchers use word-processing or spreadsheet software (e.g., Microsoft Word® and Excel®). Additionally, the authors provide their recommendations, or "tricks of the trade," for future teams interested in calculating IRR between members of a coding team without specialized software.
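The abstract does not specify which IRR statistic the authors compute, but two measures commonly calculated by hand or in a spreadsheet are simple percent agreement and Cohen's kappa. The sketch below illustrates both for two coders labeling the same set of excerpts; the code labels and function names are hypothetical, chosen only for illustration.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Fraction of excerpts that the two coders labeled identically."""
    assert len(codes_a) == len(codes_b), "coders must label the same excerpts"
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)
    count_a, count_b = Counter(codes_a), Counter(codes_b)
    # Expected agreement if each coder assigned codes at random
    # according to their own marginal frequencies.
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two coders on five interview excerpts.
coder_1 = ["motivation", "identity", "identity", "motivation", "community"]
coder_2 = ["motivation", "identity", "community", "motivation", "community"]
print(percent_agreement(coder_1, coder_2))  # 0.8 (4 of 5 excerpts match)
print(cohens_kappa(coder_1, coder_2))
```

The same arithmetic maps directly onto spreadsheet formulas: a column of match indicators (e.g., `=IF(A2=B2,1,0)`) averaged for percent agreement, with `COUNTIF` tallies supplying the marginals for kappa.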