Cited by Lee Sonogan

Abstract by Rebecca A. Glazier, Amber E. Boydstun, Jessica T. Feezell
Open-ended survey questions can provide researchers with nuanced and rich data, but content analysis is subject to misinterpretation and can introduce bias into subsequent analysis. We present a simple method to improve the semantic validity of a codebook and test for bias: a “self-coding” method where respondents first provide open-ended responses and then self-code those responses into categories. We demonstrate this method by comparing respondents’ self-coding to researcher-based coding using an established codebook. Our analysis shows significant disagreement between the codebook’s assigned categorizations of responses and respondents’ self-codes. Moreover, this technique uncovers instances where researcher-based coding disproportionately misrepresents the views of certain demographic groups. We propose using the self-coding method to iteratively improve codebooks, identify bad-faith respondents, and, perhaps, to replace researcher-based content analysis.
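At its core, the method compares two codings of the same open-ended responses: the category a researcher assigns using the codebook versus the category each respondent assigns to their own answer. Below is a minimal sketch, not from the paper itself, of how that disagreement could be quantified using raw percent agreement and Cohen’s kappa, two standard inter-coder agreement measures; the article may report different statistics, and all category labels and data here are hypothetical.

```python
# A minimal sketch (not the authors' code) of quantifying disagreement
# between researcher-based coding and respondent self-coding.
# Category labels and example data are hypothetical.
from collections import Counter

def percent_agreement(researcher_codes, self_codes):
    """Share of responses where both codings assign the same category."""
    matches = sum(r == s for r, s in zip(researcher_codes, self_codes))
    return matches / len(researcher_codes)

def cohens_kappa(researcher_codes, self_codes):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(researcher_codes)
    p_o = percent_agreement(researcher_codes, self_codes)
    r_freq = Counter(researcher_codes)
    s_freq = Counter(self_codes)
    # Expected chance agreement from each coder's marginal category rates.
    p_e = sum((r_freq[c] / n) * (s_freq[c] / n) for c in r_freq)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of the same five open-ended responses.
researcher = ["economy", "religion", "economy", "family", "religion"]
self_coded = ["economy", "family",   "economy", "family", "economy"]

print(f"Percent agreement: {percent_agreement(researcher, self_coded):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(researcher, self_coded):.2f}")
```

A kappa well below 1 would flag codebook categories that diverge from respondents’ own understanding of their answers, and running the same comparison within demographic subgroups would surface the kind of disproportionate misrepresentation the abstract describes.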
Publication: Research & Politics (Peer-Reviewed Journal)
Pub Date: Jul 27, 2021
DOI: https://doi.org/10.1177/20531680211031752
Keywords: Survey design, survey methodology, open-ended questions, content analysis
https://journals.sagepub.com/doi/full/10.1177/20531680211031752 (Plenty more sections and references in this article)
https://www.patreon.com/GROOVYGORDS
https://entertainmentcultureonline.com/