Self-coding: A method to assess semantic validity and bias when coding open-ended responses
Author(s) - Rebecca A. Glazier, Amber E. Boydstun, Jessica T. Feezell
Publication year - 2021
Publication title - Research and Politics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.232
H-Index - 20
ISSN - 2053-1680
DOI - 10.1177/20531680211031752
Subject(s) - codebook, coding (social sciences), computer science, psychology, natural language processing, response bias, social psychology, statistics, artificial intelligence, mathematics
Open-ended survey questions can provide researchers with nuanced and rich data, but content analysis is subject to misinterpretation and can introduce bias into subsequent analysis. We present a simple method to improve the semantic validity of a codebook and to test for bias: a “self-coding” method in which respondents first provide open-ended responses and then self-code those responses into categories. We demonstrate this method by comparing respondents’ self-codes to researcher-based coding using an established codebook. Our analysis shows significant disagreement between the categorizations assigned under the codebook and respondents’ self-codes. Moreover, the technique uncovers instances where researcher-based coding disproportionately misrepresents the views of certain demographic groups. We propose using the self-coding method to iteratively improve codebooks, to identify bad-faith respondents, and, perhaps, to replace researcher-based content analysis.
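To make the comparison concrete, the sketch below shows one way the disagreement between researcher-based codes and self-codes could be quantified. It is a minimal illustration, not the authors' analysis code: the category labels, demographic group names, and records are hypothetical, and Cohen's kappa is used simply as a standard chance-corrected agreement measure; the paper's own statistics may differ.

from collections import Counter

def cohen_kappa(codes_a, codes_b):
    # Chance-corrected agreement between two codings of the same responses.
    n = len(codes_a)
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                     for c in freq_a.keys() | freq_b.keys())
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical data: (respondent group, researcher-assigned code, self-code).
records = [
    ("group_1", "economy", "economy"),
    ("group_1", "health", "health"),
    ("group_2", "economy", "inequality"),
    ("group_2", "crime", "policing"),
    ("group_2", "health", "health"),
]

groups, researcher_codes, self_codes = zip(*records)
print(f"Overall kappa: {cohen_kappa(researcher_codes, self_codes):.2f}")

# Per-group disagreement rates: a group whose self-codes diverge from the
# researcher-based codes far more often than others is a candidate for the
# kind of disproportionate misrepresentation the method is meant to detect.
for g in sorted(set(groups)):
    pairs = [(r, s) for grp, r, s in records if grp == g]
    rate = sum(r != s for r, s in pairs) / len(pairs)
    print(f"{g}: disagreement {rate:.0%} over {len(pairs)} responses")

A per-group breakdown like the one printed above is the kind of signal that would flag a codebook as misrepresenting one demographic group's responses, and it can be recomputed after each round of codebook revision.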
