Applying Dynamic Training‐Subset Selection Methods Using Genetic Programming for Forecasting Implied Volatility
Author(s) -
Sana Ben Hamida,
Wafa Abdelmalek,
Fathi Abid
Publication year - 2016
Publication title -
Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/coin.12057
Subject(s) - genetic programming , computer science , selection (genetic algorithm) , volatility (finance) , machine learning , feature selection , artificial intelligence , data mining , mathematics , econometrics
Volatility is a key variable in option pricing, trading, and hedging strategies. The purpose of this article is to improve the accuracy of forecasting implied volatility using an extension of genetic programming (GP) based on dynamic training‐subset selection methods. These methods manipulate the training data in order to improve the fitting of out‐of‐sample patterns. When applied with a static subset selection method that uses a single training sample, GP can generate forecasting models that are not adapted to some out‐of‐sample fitness cases. To improve the predictive accuracy of the generated GP models, dynamic subset selection methods are introduced into the GP algorithm, allowing the training sample to change regularly during evolution. Four dynamic training‐subset selection methods are proposed, based on random, sequential, or adaptive subset selection. The last approach uses an adaptive subset weight that measures sample difficulty according to the errors on its fitness cases. Using real data on S&P 500 index options, these techniques are compared with the static subset selection method. Based on total mean squared error and the percentage of non‐fitted observations, the results show that the dynamic approach improves the forecasting performance of the generated GP models, especially those obtained from the adaptive‐random training‐subset selection method applied to the whole set of training samples.
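The adaptive‐random idea summarized above can be illustrated with a short sketch. The code below is a minimal, hypothetical Python illustration, not the authors' implementation: it re‐selects the training subset each generation by blending uniform random sampling with weights proportional to accumulated fitness‐case errors, so observations the current population fits poorly are revisited more often. All function names, the blending parameter, and the error‐update rule are assumptions made for the example.

```python
import numpy as np

def adaptive_random_subset(n_cases, subset_size, case_errors, rng, blend=0.5):
    """Pick a training subset for the current GP generation.

    Selection probabilities mix a uniform (random) component with an
    adaptive component proportional to each fitness case's accumulated
    error, so that badly fitted cases are sampled more frequently.
    """
    uniform = np.full(n_cases, 1.0 / n_cases)
    if case_errors.sum() > 0:
        adaptive = case_errors / case_errors.sum()
    else:
        adaptive = uniform  # no error information yet: fall back to random
    probs = blend * adaptive + (1.0 - blend) * uniform
    return rng.choice(n_cases, size=subset_size, replace=False, p=probs)

# Toy usage: re-select the subset at every generation of the evolution.
rng = np.random.default_rng(0)
n_cases, subset_size = 1000, 100
case_errors = np.zeros(n_cases)  # updated from fitness evaluations
for generation in range(50):
    subset = adaptive_random_subset(n_cases, subset_size, case_errors, rng)
    # ... evaluate the GP population on `subset`, then update case_errors
    # with the per-case squared errors of the evaluated individuals ...
    case_errors[subset] += rng.random(subset_size)  # placeholder update
```

In this sketch, the `blend` parameter is an assumed knob for trading off pure random selection (blend = 0) against purely error‐driven adaptive selection (blend = 1); the article's own weighting scheme may differ in detail.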