Fixed b subsampling and the block bootstrap: improved confidence sets based on p‐value calibration
Author(s) -
Xiaofeng Shao,
Dimitris N. Politis
Publication year - 2013
Publication title -
Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/j.1467-9868.2012.01037.x
Subject(s) - resampling , mathematics , heteroscedasticity , inference , bandwidth (computing) , sampling distribution , sample size determination , statistics , algorithm , confidence interval , computer science
Summary. Subsampling and block‐based bootstrap methods have been used in a wide range of inference problems for time series. To accommodate the dependence, these resampling methods involve a bandwidth parameter, such as the subsampling window width and block size in the block‐based bootstrap. In empirical work, using different bandwidth parameters could lead to different inference results, but traditional first‐order asymptotic theory does not capture the choice of the bandwidth. We propose to adopt the fixed b approach, as advocated by Kiefer and Vogelsang in the heteroscedasticity–autocorrelation robust testing context, to account for the influence of the bandwidth on inference. Under the fixed b asymptotic framework, we derive the asymptotic null distribution of the p‐values for subsampling and the moving block bootstrap, and further propose a calibration of the traditional small‐b‐based confidence intervals (regions or bands) and tests. Our treatment is fairly general as it includes both finite dimensional parameters and infinite dimensional parameters, such as the marginal distribution function. Simulation results show that the fixed b approach is more accurate than the traditional small b approach in terms of approximating the finite sample distribution, and that the calibrated confidence sets tend to have smaller coverage errors than the uncalibrated counterparts.
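For readers unfamiliar with the resampling schemes referred to in the summary, the following is a minimal Python/NumPy sketch of a moving block bootstrap percentile interval for a sample mean. It only illustrates how the block length enters as a bandwidth-type tuning parameter; it does not implement the fixed b asymptotics or the p‐value calibration proposed in the paper, and the function name moving_block_bootstrap_ci and all parameter choices are hypothetical.

```python
import numpy as np

def moving_block_bootstrap_ci(x, block_len, n_boot=2000, alpha=0.05, rng=None):
    """Moving block bootstrap percentile interval for the mean of a time series.

    block_len plays the role of the bandwidth parameter discussed in the
    summary: different choices can lead to different intervals.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Starting indices of all overlapping blocks of length block_len.
    starts = np.arange(n - block_len + 1)
    n_blocks = int(np.ceil(n / block_len))
    boot_means = np.empty(n_boot)
    for i in range(n_boot):
        # Resample blocks with replacement, concatenate, and trim to length n.
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        sample = np.concatenate([x[s:s + block_len] for s in chosen])[:n]
        boot_means[i] = sample.mean()
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Illustration on a simulated AR(1) series: the interval depends on block_len.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]
print(moving_block_bootstrap_ci(x, block_len=5, rng=rng))
print(moving_block_bootstrap_ci(x, block_len=20, rng=rng))
```

The dependence of the resulting interval on block_len is exactly the bandwidth sensitivity that the paper's fixed b framework and p‐value calibration are designed to account for.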
