Open Access
Overview of Configuring Adaptive Activation Functions for Deep Neural Networks - A Comparative Study
Author(s) -
Wang Haoxiang,
S. Smys
Publication year - 2021
Publication title -
Journal of Ubiquitous Computing and Communication Technologies
Language(s) - English
Resource type - Journals
ISSN - 2582-337X
DOI - 10.36548/jucct.2021.1.002
Subject(s) - activation function , computer science , artificial neural network , artificial intelligence , deep learning , robustness (evolution) , error function , process (computing) , machine learning , algorithm , biochemistry , chemistry , gene , operating system
Deep neural networks (DNNs) have recently demonstrated strong performance in the pattern-recognition paradigm. Research on DNNs spans network depth, filters, and training and testing datasets, and DNNs now provide solutions to nonlinear partial differential equations (PDEs). This article considers networks in which each neuron carries its own activation function: the function applied at each node is selected adaptively so as to minimize the classification error. For this reason, adaptive activation functions are adopted, with the activation adapted at every neuron in the network to reduce classification error during training. The article discusses a scaling factor for the activation function that yields better optimization as the process changes dynamically. The proposed adaptive activation function has better learning capability than a fixed activation function in any neural network. The work compares convergence rate, early-stage training behaviour, and accuracy against existing methods, and offers an in-depth view of the learning process of various neural networks; this learning process is tested on solutions spanning various frequency bands. In addition, both forward and inverse problems for the parameters of the governing equation are addressed. The proposed method has a very simple architecture, and its efficiency, robustness, and accuracy are high when nonlinear functions are considered. Overall classification performance improves in the resulting networks trained on common datasets, and comparison with recent findings in neuroscience research shows better performance.
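The abstract does not give the method's equations, so the following is only a minimal NumPy sketch of the general idea of a per-neuron adaptive activation: an activation f(x) = tanh(n · a · x) with a trainable slope `a` for each neuron and a fixed scaling factor `n`. The names `a` and `n`, the tanh choice, and the dummy loss are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def adaptive_tanh(x, a, n=10.0):
    """Per-neuron adaptive tanh: column j of x uses its own trainable slope a[j].
    `n` is a fixed scaling factor (an assumed hyperparameter here)."""
    return np.tanh(n * a * x)

def adaptive_tanh_grad_a(x, a, n=10.0):
    """Gradient of the activation output with respect to the slope parameter a."""
    z = n * a * x
    return (1.0 - np.tanh(z) ** 2) * n * x

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # batch of 4 inputs, 3 neurons
a = np.full(3, 0.1)               # one slope per neuron, initialised small

y = adaptive_tanh(x, a)

# The slopes are trained by gradient descent alongside the weights;
# one illustrative step against a dummy loss L = mean(y**2):
grad = np.mean(2.0 * y * adaptive_tanh_grad_a(x, a), axis=0)
a_new = a - 0.01 * grad
```

Because each neuron's slope is updated independently, different neurons can settle on different effective activation shapes, which is the adaptivity the abstract credits with faster convergence than a fixed activation.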
