Design of Experiment to Optimize the Architecture of Deep Learning for Nonlinear Time Series Forecasting
Author(s) -
Suhartono Suhartono,
Novri Suhermi,
Dedy Dwi Prastyo
Publication year - 2018
Publication title -
procedia computer science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2018.10.528
Subject(s) - computer science , nonlinear system , time series , deep learning , artificial intelligence , machine learning , neural network architecture
The neural architecture is essential for constructing a neural network model that produces a minimum error. Key factors include the input choice, the number of hidden layers, the series length, and the activation function. In this paper we present a design of experiment to optimize the neural network model. We conduct a simulation study by modeling data generated from a nonlinear time series model, the subset exponential smooth transition autoregressive model, ESTAR([3]). We explore a deep learning model, the deep feedforward network, and compare it to the single-hidden-layer feedforward neural network. Our experiments show that the input choice is the most important factor for improving forecast performance, and that the deep learning model is a promising approach for the forecasting task.
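The simulation setup described above can be sketched in code. The block below is a minimal illustration, not the authors' implementation: it assumes a common subset ESTAR([3]) form, y_t = theta * y_{t-3} * G(y_{t-3}) + e_t with the exponential transition G(z) = 1 - exp(-gamma * z^2), and the parameter values (theta, gamma, sigma) are placeholders chosen for illustration. The `lagged_design` helper shows how the "input choice" factor translates into the lag set fed to the network.

```python
import math
import random

def simulate_estar3(n, theta=0.8, gamma=1.0, sigma=0.1, seed=42, burn=100):
    """Simulate a subset ESTAR([3]) series under an assumed model form:
    y_t = theta * y_{t-3} * G(y_{t-3}) + e_t, where
    G(z) = 1 - exp(-gamma * z**2) is the exponential transition function.
    A burn-in period is discarded so the series forgets its zero start."""
    rng = random.Random(seed)
    y = [0.0] * (n + burn)
    for t in range(3, n + burn):
        z = y[t - 3]
        G = 1.0 - math.exp(-gamma * z * z)  # exponential transition in [0, 1)
        y[t] = theta * z * G + rng.gauss(0.0, sigma)
    return y[burn:]

def lagged_design(y, lags):
    """Build an input matrix from the chosen lags (the 'input choice' factor).
    E.g. lags=[3] matches the true subset lag; lags=[1, 2, 3] is a fuller set."""
    p = max(lags)
    X = [[y[t - l] for l in lags] for t in range(p, len(y))]
    target = y[p:]
    return X, target

y = simulate_estar3(500)
X, target = lagged_design(y, lags=[3])  # inputs for a feedforward network
```

Each candidate architecture in the experiment would then be trained on a different `(X, target)` pair, varying the lag set, the number of hidden layers, and the activation function as design factors.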
