Evaluating, Comparing, Monitoring, and Improving Representativeness of Survey Response Through R‐Indicators and Partial R‐Indicators
Author(s) - Schouten Barry, Bethlehem Jelke, Beullens Koen, Kleven Øyvin, Loosveldt Geert, Luiten Annemieke, Rutar Katja, Shlomo Natalie, Skinner Chris
Publication year - 2012
Publication title - International Statistical Review
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.051
H-Index - 54
eISSN - 1751-5823
pISSN - 0306-7734
DOI - 10.1111/j.1751-5823.2012.00189.x
Subject(s) - survey methodology, data collection, non-response, statistics, data science, risk analysis (engineering)
Summary Non‐response is a common source of error in many surveys. Because surveys are often costly instruments, quality‐cost trade‐offs play a continuing role in their design and analysis. Advances in telephone, computer, and Internet technology have had, and continue to have, considerable impact on survey design. Recently, a strong focus on methods for monitoring and tailoring survey data collection has emerged as a new paradigm for efficiently reducing non‐response error; paradata and adaptive survey designs are key words in these developments. Prerequisites to evaluating, comparing, monitoring, and improving the quality of survey response are a conceptual framework for representative survey response, indicators to measure deviations from it, and indicators to identify subpopulations that need increased effort. In this paper, we present an overview of representativeness indicators, or R‐indicators, that are fit for these purposes. We give several examples and provide guidelines for their use in practice.
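The abstract's metadata does not reproduce the indicator itself, but the R-indicator discussed in this literature is commonly defined as R(ρ) = 1 − 2·S(ρ), where S(ρ) is the standard deviation of the individual response propensities ρ: R = 1 means every sample unit is equally likely to respond (perfectly representative response), and smaller values signal selective non-response. The sketch below illustrates that definition; the grouped response-rate estimator of the propensities is a deliberately crude stand-in (in practice a logistic-regression model on frame variables would be used), and all names and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def r_indicator(propensities):
    """Sample-based R-indicator: R = 1 - 2 * S(rho), with S(rho) the
    (sample) standard deviation of the response propensities.
    R = 1 indicates perfectly representative response."""
    rho = np.asarray(propensities, dtype=float)
    return 1.0 - 2.0 * rho.std(ddof=1)

def category_propensities(categories, responded):
    """Estimate propensities as response rates within the categories of a
    single auxiliary frame variable (a crude stand-in for a fitted
    response-propensity model such as logistic regression)."""
    categories = np.asarray(categories)
    responded = np.asarray(responded, dtype=float)
    rates = {c: responded[categories == c].mean()
             for c in np.unique(categories)}
    return np.array([rates[c] for c in categories])

# Hypothetical toy data: response rates differ sharply by age group
# (40% among "young" vs 80% among "old"), so R drops below 1.
age = np.array(["young"] * 50 + ["old"] * 50)
resp = np.array([1] * 20 + [0] * 30 + [1] * 40 + [0] * 10)
rho = category_propensities(age, resp)
print(round(r_indicator(rho), 3))  # -> 0.598
```

Under this definition, equal propensities give S(ρ) = 0 and hence R = 1; the factor 2 scales the indicator so that, for propensities in [0, 1], R stays within [0, 1] in the worst case. Partial R-indicators, as described in the paper, decompose this variation by auxiliary variables to point at the subpopulations driving the lack of representativeness.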
