Combined Methodology for Linear Time Series Forecasting
Author(s) -
Ricardo Moraes Muniz da Silva,
Mauricio Kugler,
Taizo Umezaki
Publication year - 2020
Publication title -
IEEJ Transactions on Electrical and Electronic Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.254
H-Index - 30
eISSN - 1931-4981
pISSN - 1931-4973
DOI - 10.1002/tee.23252
Subject(s) - autoregressive fractionally integrated moving average, autoregressive integrated moving average, dependency (UML), computer science, series (stratigraphy), autoregressive model, time series, process (computing), moving average, preprocessor, data mining, machine learning, artificial intelligence, econometrics, long memory, mathematics, volatility (finance), paleontology, computer vision, biology, operating system
Time series forecasting is an important type of quantitative model used to predict future values from a series of past observations whose generation process is unknown. Two of the best-known methods for modeling linear time series are the autoregressive integrated moving average (ARIMA) and the autoregressive fractionally integrated moving average (ARFIMA). The number of past observations needed for an accurate prediction varies across datasets. Short and long memory dependency problems require different handling: the ARIMA model is limited to the former, while the ARFIMA model was specifically developed for the latter. Preprocessing techniques and modifications to specific components of these models are common approaches to the memory dependency problem and can improve accuracy, but such solutions are tailored to particular datasets. This paper proposes a new method that combines the short and long memory characteristics of the two aforementioned models in order to keep the cumulative error low across several different scenarios. Twelve public time series datasets were used to compare the performance of the proposed method with the original models. The results were also compared with two alternative methods from the literature designed to handle datasets with different memory dependencies. The new approach yielded a lower error in the majority of the experiments, failing only on the datasets that contain a large number of features. © 2020 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
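For readers unfamiliar with the two model families the abstract contrasts, the sketch below is an illustration only, not the paper's combined method: it fits a short-memory ARIMA with integer differencing and an ARFIMA-style alternative built from a hand-rolled fractional differencing step followed by an ARMA fit. The use of statsmodels, the toy random-walk series, and the orders (1, 1, 1), (1, 0, 1) and d = 0.4 are all assumptions chosen for demonstration, not values from the paper.

```python
# Illustrative only: NOT the paper's combined method. Shows the mechanics of
# (a) a short-memory ARIMA and (b) an ARFIMA-style model approximated by
# fractional differencing plus an ARMA fit. All orders and data are placeholders.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA


def frac_diff(x, d, max_lag=100, tol=1e-3):
    """Apply (1 - B)^d to a 1-D series using truncated binomial weights."""
    weights = [1.0]
    for k in range(1, max_lag):
        w_k = -weights[-1] * (d - k + 1) / k
        if abs(w_k) < tol:
            break
        weights.append(w_k)
    w = np.asarray(weights)
    n = len(w)
    # Each output point is a weighted sum of the current and previous n-1 values.
    return np.array([np.dot(w, x[t - n + 1:t + 1][::-1])
                     for t in range(n - 1, len(x))])


rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))  # toy random-walk series

# Short-memory model: integer differencing inside ARIMA(1, 1, 1).
arima_res = ARIMA(y, order=(1, 1, 1)).fit()

# Long-memory style model: ARMA(1, 1) on a fractionally differenced series.
z = frac_diff(y, d=0.4)
arfima_res = ARIMA(z, order=(1, 0, 1)).fit()

# The two AICs are computed on differently transformed series, so they are not
# directly comparable; the demo only shows how each model is fitted.
print("ARIMA(1,1,1) AIC:", round(arima_res.aic, 1))
print("ARMA(1,1) on fractionally differenced series AIC:", round(arfima_res.aic, 1))
print("Next 5 ARIMA forecasts:", arima_res.forecast(steps=5))
```

The truncated binomial expansion is one common way to approximate the fractional differencing operator that distinguishes ARFIMA from ARIMA; the paper's contribution, combining the short and long memory behaviour of both models, is not reproduced here.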