Open Access
The promise of open survey questions—The validation of text-based job satisfaction measures
Author(s) -
Indy Wijngaards,
Martijn Burger,
Job van Exel
Publication year - 2019
Publication title - PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0226408
Subject(s) - job satisfaction, psychometrics, construct validity, discriminant validity, internal consistency, sentiment analysis, natural language processing, computer-aided text analysis, artificial intelligence, information retrieval, data mining, data science, statistics, psychology, social psychology, computer science
Recent advances in computer-aided text analysis (CATA) have allowed organizational scientists to construct reliable and convenient measures from open texts. As yet, little research has used CATA to analyze responses to open survey questions and to construct text-based measures of psychological constructs. In our study, we demonstrated the potential of CATA methods for constructing text-based job satisfaction measures from responses to a completely open question and a semi-open question. To do this, we employed three sentiment analysis techniques: Linguistic Inquiry and Word Count 2015, SentimentR and SentiStrength, and quantified the forms of measurement error they introduced: specific factor error, algorithm error and transient error. We conducted an initial test of the text-based measures’ validity, assessing their convergence with closed-question job satisfaction measures. We adopted a time-lagged survey design (N = 996 in wave 1; N = 116 in wave 2) to test our hypotheses. In line with our hypotheses, we found that specific factor error was higher in the open question text-based measure than in the semi-open question text-based measure. As expected, algorithm error was substantial for both the open and semi-open question text-based measures. Transient error in the text-based measures was higher than expected, as it generally exceeded the transient error in the human-coded and closed-question job satisfaction measures. Our initial test of convergent and discriminant validity indicated that the semi-open question text-based measure is especially suitable for measuring job satisfaction. The article closes with a discussion of limitations and an agenda for future research.
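
As a rough illustration of the kind of text-based measurement the abstract describes, the sketch below scores a few invented open-question responses with a lexicon-based sentiment analyzer and correlates the scores with closed-question ratings, mirroring the convergent validity check. This is not the study's pipeline: NLTK's VADER stands in for the tools actually used (LIWC 2015, SentimentR, SentiStrength), and all responses and ratings are made up for the example.

```python
# Hypothetical sketch: lexicon-based sentiment scoring of open-question job
# satisfaction responses, plus a convergence check against a closed-question
# rating. VADER is a stand-in for LIWC 2015 / SentimentR / SentiStrength.
import statistics

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

# Invented example data: open-text responses paired with closed ratings.
responses = [
    "I love the variety in my work and my supportive colleagues.",
    "The pay is fine but the constant overtime is wearing me down.",
    "Honestly, I dread Monday mornings; management ignores us.",
    "Great team, interesting projects, and room to grow.",
]
closed_ratings = [5, 3, 1, 5]  # closed-question job satisfaction, 1-5 scale

analyzer = SentimentIntensityAnalyzer()
# VADER's compound score ranges from -1 (most negative) to +1 (most positive).
text_scores = [analyzer.polarity_scores(r)["compound"] for r in responses]

# Convergent validity check: a text-based measure of job satisfaction should
# correlate positively with a closed-question measure of the same construct.
r = statistics.correlation(text_scores, closed_ratings)  # Pearson's r
print(f"Text-based scores: {[round(s, 2) for s in text_scores]}")
print(f"Pearson r with closed-question ratings: {r:.2f}")
```

In the study itself, convergence was assessed across two survey waves and against human-coded responses as well; this sketch only shows the single-wave correlation step under the assumptions stated above.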
