
Nearly Monotone Neural Approximation with Quadratic Activation Function
Author(s) - Hawraa Abbas Almurieb, Eman Samir Bhaya
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1804/1/012098
Subject(s) - monotone polygon , mathematics , quadratic equation , monotonic function , smoothness , modulus of continuity , interval (graph theory) , activation function , integrable system , artificial neural network , mathematical analysis , discrete mathematics , pure mathematics , combinatorics , computer science , geometry , machine learning
Quadratic functions yield good rates of approximation when used as activation functions in feedforward neural networks. Monotonicity, in turn, is important for describing the behavior of a function, and hence the behavior of its constrained approximation. Previously, the degree of approximation by feedforward neural networks with a quadratic activation function was shown to be bounded in terms of the second-order modulus of smoothness, with no improvement beyond that order. In this paper, we discuss whether these estimates can be improved for Lebesgue integrable functions. With nearly monotone approximation, estimates in terms of a higher-order modulus of smoothness become possible, whereas they do not for purely monotone approximation. We obtain a nearly monotone approximation by splitting the interval [0,1] into a partition of subintervals with arbitrarily small lengths and then excluding small intervals near the endpoints of the partition's subintervals. Counterexamples, however, rule out any further improvement outside that restricted set. All results are proved in the Lp space with p < 1.
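
For orientation, estimates of the kind discussed in the abstract are stated in terms of the k-th order modulus of smoothness in the Lp quasi-norm (0 < p < 1); the standard definition, with the abstract's case being k = 2, reads

    \omega_k(f, t)_p = \sup_{0 < h \le t} \big\| \Delta_h^k f \big\|_{L_p[0,\,1-kh]},
    \qquad
    \Delta_h^k f(x) = \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} f(x + ih).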
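
The setting can also be illustrated numerically. The following is a minimal sketch under stated assumptions, not the authors' construction: it fits a one-hidden-layer network with quadratic activation sigma(t) = t^2 to a monotone target on [0,1] by linear least squares over random features, builds a uniform partition, excludes a small neighbourhood of each partition knot, and measures the error in the discrete Lp quasi-norm with p = 1/2. All names and values here (n_neurons, n, delta, p) are illustrative assumptions.

    # Minimal numerical sketch of the setting in the abstract; not the paper's proof.
    import numpy as np

    rng = np.random.default_rng(0)

    def quadratic_net_fit(x, y, n_neurons=20):
        """Least-squares fit of N(x) = sum_i c_i (a_i x + b_i)^2 with random a_i, b_i."""
        a = rng.uniform(-5.0, 5.0, n_neurons)
        b = rng.uniform(-1.0, 1.0, n_neurons)
        features = (np.outer(x, a) + b) ** 2          # quadratic activation
        c, *_ = np.linalg.lstsq(features, y, rcond=None)
        return lambda t: ((np.outer(np.atleast_1d(t), a) + b) ** 2) @ c

    def lp_quasinorm(err, mask, p=0.5):
        """Discrete L_p 'norm' (a quasi-norm for p < 1) over the retained points."""
        e = np.abs(err[mask]) ** p
        return e.mean() ** (1.0 / p)

    # Monotone target sampled on [0,1].
    x = np.linspace(0.0, 1.0, 2001)
    f = np.sqrt(x)                                    # monotone on [0,1]

    net = quadratic_net_fit(x, f)
    approx = net(x)

    # Uniform partition into n subintervals; exclude a delta-neighbourhood of each knot,
    # mirroring the "nearly monotone" restriction in the abstract.
    n, delta = 10, 0.01
    knots = np.linspace(0.0, 1.0, n + 1)
    keep = np.all(np.abs(x[:, None] - knots[None, :]) > delta, axis=1)

    print("L_{1/2} error on the restricted set:", lp_quasinorm(f - approx, keep, p=0.5))

The mask `keep` is the point of the sketch: the error is measured only on [0,1] minus small neighbourhoods of the partition's endpoints, which is exactly where the abstract claims a higher-order estimate is attainable.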