Embedded experiments in repeated and overlapping surveys
Author(s) - Chipperfield, James; Bell, Philip
Publication year - 2010
Publication title - Journal of the Royal Statistical Society: Series A (Statistics in Society)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.103
H-Index - 84
eISSN - 1467-985X
pISSN - 0964-1998
DOI - 10.1111/j.1467-985X.2009.00602.x
Subject(s) - design of experiments , computer science , sample size determination , sample (material) , null hypothesis , repeated measures design , inference , randomized experiment , set (abstract data type) , statistical hypothesis testing , quasi experiment , exploit , data collection , point estimation , statistics , econometrics , interview , mathematics , artificial intelligence , chemistry , computer security , chromatography , political science , law , programming language , population , demography , sociology
Summary.  Statistical agencies make changes to the data collection methodology of their surveys to improve the quality of the data collected or to improve the efficiency with which they are collected. For reasons of cost, it may not be possible to estimate the effect of such a change on survey estimates or response rates reliably without conducting an experiment that is embedded in the survey, in which some respondents are enumerated under the new method and some under the existing method. Embedded experiments are often designed for repeated and overlapping surveys; however, previous methods use sample data from only one occasion. The paper focuses on estimating the effect of a methodological change on estimates in the case of repeated surveys with overlapping samples from several occasions. Efficient design of an embedded experiment that covers more than one time point is also discussed. All inference is unbiased over an assumed measurement model, the experimental design and the complex sample design. Other benefits of the approach proposed include the following: it exploits the correlation between the samples on each occasion to improve estimates of treatment effects; treatment effects are allowed to vary over time; it is robust against incorrectly rejecting the null hypothesis of no treatment effect; it allows a wide set of alternative experimental designs. The paper applies the methodology proposed to the Australian Labour Force Survey to measure the effect of replacing pen‐and‐paper interviewing with computer‐assisted interviewing. This application considered alternative experimental designs in terms of their statistical efficiency and their risks to maintaining a consistent series. The approach proposed is significantly more efficient than using only one month of sample data in estimation.
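
To make the efficiency gain from pooling occasions concrete, the following is a minimal illustrative sketch only, not the authors' estimator: it assumes the embedded experiment yields a treatment-effect estimate (new minus existing method) on each of two overlapping occasions, with an assumed covariance between them arising from the sample overlap, and combines the two estimates by generalized least squares. All numbers, variable names and the toy covariance matrix are hypothetical.

    import numpy as np

    # Hypothetical per-occasion estimates of the treatment effect
    # (new method minus existing method), one per survey occasion.
    delta_hat = np.array([0.8, 0.5])

    # Assumed covariance matrix of the two estimates. The off-diagonal
    # term is non-zero because the overlapping part of the sample is
    # observed on both occasions; exploiting this correlation is what
    # improves efficiency relative to a single-occasion estimate.
    V = np.array([[0.25, 0.10],
                  [0.10, 0.25]])

    # GLS (inverse-variance) combination under a common-effect working model:
    # delta_gls = (1' V^{-1} 1)^{-1} 1' V^{-1} delta_hat
    ones = np.ones(2)
    V_inv = np.linalg.inv(V)
    w = V_inv @ ones / (ones @ V_inv @ ones)   # combination weights
    delta_gls = w @ delta_hat                  # pooled treatment effect
    var_gls = 1.0 / (ones @ V_inv @ ones)      # variance of the pooled estimate

    print(f"weights: {w}, combined effect: {delta_gls:.3f}, variance: {var_gls:.3f}")
    # With these assumed inputs the pooled variance (about 0.175) is smaller
    # than the single-occasion variance (0.25), illustrating the kind of gain
    # the paper reports from using more than one month of sample data.
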
