Efficient selection of hyperparameters in large Bayesian VARs using automatic differentiation
Author(s) -
Chan Joshua C. C.,
Jacobi Liana,
Zhu Dan
Publication year - 2020
Publication title -
Journal of Forecasting
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.543
H-Index - 59
eISSN - 1099-131X
pISSN - 0277-6693
DOI - 10.1002/for.2660
Subject(s) - hyperparameter, hyperparameter optimization, computer science, prior probability, Bayesian probability, conjugate prior, Bayesian optimization, set (abstract data type), selection (genetic algorithm), machine learning, grid, data mining, artificial intelligence, mathematical optimization, support vector machine, mathematics, programming language, geometry
Large Bayesian vector autoregressions with the natural conjugate prior are now routinely used for forecasting and structural analysis. It has been shown that selecting the prior hyperparameters in a data‐driven manner can often substantially improve forecast performance. We propose a computationally efficient method to obtain the optimal hyperparameters based on automatic differentiation, which is an efficient way to compute derivatives. Using a large US data set, we show that using the optimal hyperparameter values leads to substantially better forecast performance. Moreover, the proposed method is much faster than the conventional grid‐search approach, and is applicable in high‐dimensional optimization problems. The new method thus provides a practical and systematic way to develop better shrinkage priors for forecasting in a data‐rich environment.
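The core idea in the abstract, replacing a grid search over prior hyperparameters with gradient-based optimization driven by automatic differentiation, can be illustrated with a minimal sketch. This is not the paper's implementation: the forward-mode dual-number AD below and the toy concave objective standing in for the log marginal likelihood of a natural-conjugate Bayesian VAR are assumptions made purely for illustration.

```python
import math

class Dual:
    """Dual number a + b*eps with eps^2 = 0; 'dot' carries the derivative
    through every arithmetic operation (forward-mode AD)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def dlog(x):
    # chain rule for log: d log(x) = dx / x
    return Dual(math.log(x.val), x.dot / x.val)

def log_marginal_likelihood(lam):
    # Toy stand-in (an assumption, not the paper's objective): concave in
    # the shrinkage hyperparameter lam, maximised at lam = 2.
    return dlog(lam) - Dual(0.5) * lam

def optimise(lam=0.5, step=0.5, iters=200):
    # Gradient ascent: one AD evaluation per iteration, instead of
    # evaluating the objective over an entire hyperparameter grid.
    for _ in range(iters):
        grad = log_marginal_likelihood(Dual(lam, 1.0)).dot
        lam += step * grad
    return lam

print(round(optimise(), 4))  # → 2.0, the maximiser of the toy objective
```

The advantage the abstract points to shows up in cost: a grid over p hyperparameters needs exponentially many objective evaluations in p, whereas each AD-based gradient step costs a small constant multiple of one evaluation, which is what makes the approach usable for high-dimensional hyperparameter selection.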
