Forecast evaluation of small nested model sets
Author(s) - Kirstin Hubrich, Kenneth D. West
Publication year - 2010
Publication title - Journal of Applied Econometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.878
H-Index - 99
eISSN - 1099-1255
pISSN - 0883-7252
DOI - 10.1002/jae.1176
Subject(s) - econometrics, statistics, mean squared error, forecasting, mathematics, computer science
We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require re-estimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for US inflation. Copyright © 2010 John Wiley & Sons, Ltd.
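As a rough illustration of the ingredients the abstract mentions, the sketch below computes the Clark and West (2007) adjusted MSPE difference for each alternative model and then forms the maximum t-statistic across them. All function names are my own, and this is only the test-statistic construction under simplifying assumptions (no serial correlation correction, no simulated critical values); it is not the paper's full procedure.

```python
import numpy as np

def clark_west_adjusted_diff(y, bench_fc, alt_fc):
    """Clark-West (2007) adjusted MSPE difference series (illustrative).

    y        : realized values, shape (T,)
    bench_fc : forecasts from the nested benchmark model, shape (T,)
    alt_fc   : forecasts from a larger model that nests the benchmark
    """
    e_bench = y - bench_fc
    e_alt = y - alt_fc
    # The (bench_fc - alt_fc)^2 term offsets the extra noise the larger
    # model incurs from estimating parameters that are zero under the null.
    return e_bench**2 - (e_alt**2 - (bench_fc - alt_fc)**2)

def max_t_statistic(y, bench_fc, alt_fcs):
    """Maximum over alternatives of the t-statistic on the mean adjusted
    MSPE difference (simple i.i.d. standard error, for illustration)."""
    t_stats = []
    for alt_fc in alt_fcs:
        f = clark_west_adjusted_diff(y, bench_fc, alt_fc)
        t_stats.append(np.sqrt(len(f)) * f.mean() / f.std(ddof=1))
    return max(t_stats)
```

In the paper the maximum t-statistic is compared against critical values appropriate for the joint comparison; the naive per-model normal critical value shown implicitly here would not control size across several alternatives.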
