Open Access
Nearly Exponential Neural Networks Approximation in Lp Spaces
Author(s) -
Eman Samir Bhaya,
Zahraa Mahmoud Fadel
Publication year - 2017
Publication title -
maǧallaẗ ǧāmiʿaẗ bābil (Journal of the University of Babylon)
Language(s) - English
Resource type - Journals
eISSN - 2312-8135
pISSN - 1992-0652
DOI - 10.29196/jub.v26i1.359
Subject(s) - artificial neural network , euclidean geometry , exponential function , function approximation , regular polygon , sigmoid function , function (biology) , mathematics , minimax approximation algorithm , convex function , approximation algorithm , computer science , compact space , stochastic neural network , discrete mathematics , algorithm , artificial intelligence , pure mathematics , time delay neural network , mathematical analysis , geometry , evolutionary biology , biology
Neural network approximation is used widely in many applications; it has been applied to solve problems in computer science, engineering, physics, and other fields. The reason for this successful application is the ability of neural networks to approximate arbitrary functions. Over the last 30 years, many papers have shown that any continuous function defined on a compact subset of a Euclidean space of dimension greater than 1 can be approximated uniformly by a neural network with one hidden layer. Here we prove that any real function in L_p(C), defined on a compact and convex subset C of the Euclidean space, can be approximated by a sigmoidal neural network with one hidden layer, which we call nearly exponential approximation.
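
To make the setting concrete, the following is a minimal sketch, not the paper's construction, of approximating a continuous target on the compact set [0, 1] with a one-hidden-layer sigmoidal network and measuring the L_p error. The random-feature fitting strategy, the target function, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: one-hidden-layer sigmoidal network on the compact
# interval [0, 1]. Hidden weights and biases are drawn at random; only the
# output-layer coefficients are fitted by least squares. This is NOT the
# construction from the paper, just a small demonstration of the setting.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def target(x):
    # Arbitrary continuous target chosen for the example (an assumption).
    return np.sin(2 * np.pi * x) + 0.5 * x ** 2

n_hidden = 50
w = rng.normal(scale=10.0, size=n_hidden)      # hidden-layer weights
b = rng.uniform(-10.0, 10.0, size=n_hidden)    # hidden-layer biases

x = np.linspace(0.0, 1.0, 2000)                # dense grid on the compact set
H = sigmoid(np.outer(x, w) + b)                # hidden-layer activations
c, *_ = np.linalg.lstsq(H, target(x), rcond=None)  # fit output weights

approx = H @ c

# Approximate the L_p norm of the error on [0, 1] by averaging over the grid
# (the interval has length 1, so the mean approximates the integral).
p = 2
lp_error = np.mean(np.abs(target(x) - approx) ** p) ** (1.0 / p)
print(f"approximate L_{p} error on [0, 1]: {lp_error:.2e}")
```

Increasing the number of hidden units typically drives the measured L_p error down, which is the qualitative behaviour the approximation result describes.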
