Discussion of “Sequential Bayesian learning for stochastic volatility with variance‐gamma jumps in returns”
Author(s) - Refik Soyer
Publication year - 2018
Publication title - Applied Stochastic Models in Business and Industry
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.413
H-Index - 40
eISSN - 1526-4025
pISSN - 1524-1904
DOI - 10.1002/asmb.2367
I am very glad for the opportunity to discuss this interesting paper by Warty, Lopes, and Polson on sequential Bayesian estimation and inference for the flexible and parsimonious stochastic volatility with variance-gamma jumps (SVVG) model for financial returns. They begin Section 2 with the various components that go into the SVVG model, i.e., a jump-diffusion price process with gamma-subordinated Brownian motion jumps, a correlated Cox-Ingersoll-Ross (CIR) variance process, and a leveraged price process. The model is able to capture infinite-activity jumps in returns and is especially suited to markets with high liquidity and high activity. Estimation and inference are computationally challenging, and this has been one of the main reasons why the SVVG model is not widely used in practice. The authors show how to adapt the sequential learning auxiliary particle filter of Carvalho et al. (2010) for estimation. Their first step is to present a discretized SVVG model as a state space model through equations (4)-(7), with state vector x_t = (v_t, v_{t-1}, J_t, G_t).

Section 3 describes the prior specification. The authors use the usual, and where possible conjugate, priors, similar to those used in the well-cited Jacquier et al. (1995), which guarantee a proper posterior. Section 4 is well written and explains the posterior estimation of the dynamic states and the static hyperparameters of the SVVG model. A considerable amount of detail is split between this section and the appendix, which enables a reader to follow the computational path. Section 5 shows results for simulated data of length T = 5000 using M = 10,000 particles. The results seem good, given how much is being estimated simultaneously. The authors give a good discussion of issues such as underestimation of the latent variance when the true latent state attains very large values for short periods of time, and inadequate learning of the jump parameters.

I have a few discussion points for the authors to consider. The first concerns prior specification. The authors state that “for most parameters of SVVG, the likelihood overwhelms the contribution from the priors rather quickly for the sample sizes used in the simulation and empirical studies considered here. In limited testing on synthetic data, these prior choices often provide good results for fitting SVVG to individual asset returns.” They also state that “Priors for static parameters of the jump and time-change processes are informed by previous SVVG calibration studies where available.” Despite the long return series, I wonder how sensitive the results are to the prior assumptions and how one might calibrate the priors to variations in stock characteristics such as liquidity. Did the authors find that their procedure was more robust to prior assumptions on some sets of parameters than on others?

My second point concerns computational complexity and run times. For example, it would be useful to know how long the approach in Section 5, with M = 10,000 particles, takes to run. Also, how would optimizing the number of particles help with the two main estimation issues raised in the paper?

Third, it would be interesting to see future work by these authors addressing some of the open problems in SVVG modeling that they have mentioned.
It would also be of interest to see how the SVVG model can be adapted to make inferences for high-frequency intra-day financial returns on a large set of stocks, perhaps after biclustering (Liu et al., 2018) to obtain homogeneous submatrices.
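To make the structure of the discretized SVVG state-space model and the sequential filtering idea more concrete, the sketch below simulates a generic Euler-type discretization of SVVG-style dynamics (a CIR variance with leverage and gamma-subordinated Brownian-motion jumps) and then runs a plain bootstrap particle filter for the latent variance on the simulated returns. It is only an illustration under assumed notation and hypothetical parameter values: the discretization is not the authors' equations (4)-(7), and the filter treats the static parameters as known and ignores the jump component when weighting, so it is not the particle-learning auxiliary particle filter of Carvalho et al. (2010) used in the paper.

```python
# Illustrative sketch only: a generic Euler-type discretization of SVVG-style
# dynamics and a plain bootstrap particle filter for the latent variance.
# Parameter values are hypothetical, the discretization is not taken from the
# paper's equations (4)-(7), and the filter is NOT the particle-learning
# auxiliary particle filter of Carvalho et al. (2010).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters
T, dt = 5000, 1.0                          # series length, time step
mu = 0.0                                   # return drift
kappa, theta, sigma_v = 0.02, 1.0, 0.1     # CIR: mean reversion, long-run variance, vol-of-vol
rho = -0.4                                 # leverage correlation
theta_J, sigma_J, nu = -0.05, 0.3, 1.5     # VG jump drift, jump vol, gamma variance rate

# --- Simulate discretized SVVG-style dynamics --------------------------------
v = np.empty(T + 1); v[0] = theta          # latent variance path
y, J, G = np.empty(T), np.empty(T), np.empty(T)
for t in range(T):
    z1 = rng.standard_normal()                                     # return shock
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()    # correlated variance shock
    G[t] = rng.gamma(shape=dt / nu, scale=nu)                      # gamma time-change increment
    J[t] = theta_J * G[t] + sigma_J * np.sqrt(G[t]) * rng.standard_normal()  # VG jump
    y[t] = mu * dt + np.sqrt(v[t] * dt) * z1 + J[t]                # return with jump
    v[t + 1] = abs(v[t] + kappa * (theta - v[t]) * dt              # Euler CIR step,
                   + sigma_v * np.sqrt(v[t] * dt) * z2)            # reflected at zero

# --- Bootstrap particle filter for the latent variance -----------------------
M = 10_000                                 # number of particles, as in Section 5
vp = np.full(M, theta)                     # particles representing v_{t-1}
v_filt = np.empty(T)                       # filtered mean of the latent variance
for t in range(T):
    # Weight particles by the return likelihood, crudely ignoring the jump.
    w = np.exp(-0.5 * (y[t] - mu * dt) ** 2 / (vp * dt)) / np.sqrt(vp)
    w /= w.sum()
    v_filt[t] = np.sum(w * vp)
    # Multinomial resampling, then propagation through the CIR transition.
    vp = vp[rng.choice(M, size=M, p=w)]
    z = rng.standard_normal(M)
    vp = np.abs(vp + kappa * (theta - vp) * dt + sigma_v * np.sqrt(vp * dt) * z)

print("RMSE of filtered variance:", np.sqrt(np.mean((v_filt - v[:T]) ** 2)))
```

Varying M in such a stripped-down filter gives a feel for how the Monte Carlo error of the filtered variance scales with the number of particles, which is the kind of baseline I have in mind when raising the questions above about run times and particle counts for the authors' full sequential learning scheme.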