Approximation error of Fourier neural networks
Author(s) - Zhumekenov Abylay, Takhanov Rustem, Castro Alejandro J., Assylbekov Zhenisbek
Publication year - 2021
Publication title - Statistical Analysis and Data Mining: The ASA Data Science Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 33
eISSN - 1932-1872
pISSN - 1932-1864
DOI - 10.1002/sam.11506
Subject(s) - fourier series , artificial neural network , feedforward neural network , fourier transform , activation function , function approximation , rate of convergence , approximation error , algorithm , mathematical analysis , mathematics , computer science , artificial intelligence
Abstract The paper investigates the approximation error of two-layer feedforward Fourier Neural Networks (FNNs). Such networks are motivated by the approximation properties of Fourier series. Several implementations of FNNs have been proposed since the 1980s: by Gallant and White, Silvescu, Tan, Zuo and Cai, and Liu. The main focus of our work is Silvescu's FNN, because its activation function does not fit into the category of networks in which a linearly transformed input is passed to the activation; networks of the latter type were extensively described by Hornik. For the non-trivial Silvescu FNN, the convergence rate is proven to be of order O(1/n). The paper then investigates the classes of functions approximated by Silvescu's FNN, which turn out to lie in the Schwartz space and the space of positive definite functions.
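To make the distinction drawn in the abstract concrete, here is a minimal sketch of a two-layer Silvescu-style FNN, assuming each hidden neuron computes a product of per-coordinate cosines, prod_i cos(w_i * x_i + b_i), rather than applying an activation to a single linear combination w.x + b. This is an illustrative reading of the abstract's description, not the paper's exact formulation, and all names (silvescu_fnn, W, B, c) are hypothetical.

```python
import numpy as np

def silvescu_fnn(x, W, B, c):
    """Two-layer Fourier Neural Network with Silvescu-style neurons (sketch).

    Hidden neuron k outputs prod_i cos(W[k, i] * x[i] + B[k, i]):
    the cosine acts on each coordinate separately before the product,
    so the neuron is NOT of the Hornik-type form sigma(w.x + b).

    x : (d,)   input vector
    W : (n, d) per-coordinate frequencies
    B : (n, d) per-coordinate phase shifts
    c : (n,)   output-layer weights
    """
    hidden = np.prod(np.cos(W * x + B), axis=1)  # (n,) products of cosines
    return hidden @ c                            # linear read-out layer

# Usage example: evaluate a random network on a 3-dimensional input.
rng = np.random.default_rng(0)
d, n = 3, 16
x = rng.normal(size=d)
W = rng.normal(size=(n, d))
B = rng.uniform(0.0, 2.0 * np.pi, size=(n, d))
c = rng.normal(size=n) / n
print(silvescu_fnn(x, W, B, c))
```

By contrast, a Hornik-type neuron would compute sigma(W[k] @ x + b[k]) for a scalar bias b[k]; the per-coordinate phases inside the product are what place Silvescu's FNN outside that class.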
