Semantic Apparatus – Self-coding: A method to assess semantic validity and bias when coding open-ended responses

Cited by Lee Sonogan


Abstract by Rebecca A. Glazier, Amber E. Boydstun, Jessica T. Feezell

Open-ended survey questions can provide researchers with nuanced and rich data, but content analysis is subject to misinterpretation and can introduce bias into subsequent analysis. We present a simple method to improve the semantic validity of a codebook and test for bias: a “self-coding” method where respondents first provide open-ended responses and then self-code those responses into categories. We demonstrate this method by comparing respondents’ self-coding to researcher-based coding using an established codebook. Our analysis shows significant disagreement between the codebook’s assigned categorizations of responses and respondents’ self-codes. Moreover, this technique uncovers instances where researcher-based coding disproportionately misrepresents the views of certain demographic groups. We propose using the self-coding method to iteratively improve codebooks, identify bad-faith respondents, and, perhaps, to replace researcher-based content analysis.
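The comparison the abstract describes, checking how often respondents' self-codes agree with researcher-assigned codes, is typically quantified with a chance-corrected agreement statistic. Below is a minimal sketch of that comparison using Cohen's kappa; the category labels and responses are hypothetical illustrations, not data from the article.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two sets of categorical codes."""
    assert len(codes_a) == len(codes_b) and codes_a, "need paired, non-empty code lists"
    n = len(codes_a)
    # Observed agreement: fraction of responses where both coders assigned the same category.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement: chance overlap given each coder's marginal category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: researcher codebook-based codes vs. respondents' self-codes
# for six open-ended answers to a "most important problem" style question.
researcher_codes = ["economy", "health", "economy", "crime", "health", "economy"]
self_codes      = ["economy", "economy", "economy", "crime", "health", "health"]

raw = sum(a == b for a, b in zip(researcher_codes, self_codes)) / len(self_codes)
print(f"Raw agreement:  {raw:.2f}")
print(f"Cohen's kappa:  {cohens_kappa(researcher_codes, self_codes):.2f}")
```

A low kappa on data like this would flag the kind of codebook–respondent disagreement the authors report; broken out by demographic group, the same statistic could surface the disproportionate misrepresentation the article discusses.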

Publication: Research & Politics (Peer-Reviewed Journal)

Pub Date: Jul 27, 2021 DOI:

Keywords: Survey design, survey methodology, open-ended questions, content analysis (Plenty more sections and references in this article)
