Enhanced Error Reduction of Signal Power Loss During Electromagnetic Propagation: Architectural Composition and Learning Rate Selection
Author(s) -
Virginia Chika Ebhota,
Viranjay M. Srivastava
Publication year - 2021
Publication title -
Journal of Communications
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.185
H-Index - 35
eISSN - 2374-4367
pISSN - 1796-2021
DOI - 10.12720/jcm.16.10.450-456
Subject(s) - multilayer perceptron , perceptron , artificial neural network , computer science , backpropagation , mean squared error , artificial intelligence , correlation coefficient , network architecture , standard deviation , pattern recognition (psychology) , supervised learning , pearson product moment correlation coefficient , signal (programming language) , machine learning , statistics , mathematics , computer network , programming language
This research work analyses the effect of the architectural composition of a Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN), together with the choice of learning rate, on the effective prediction of signal power loss during electromagnetic signal propagation. MLP networks with a single hidden layer and with two hidden layers have been considered, with architectures ranging from 4 to 100 hidden neurons for both networks. Optimal training of the single-hidden-layer MLP required 40 hidden neurons, giving a correlation coefficient of 0.99670 and a standard deviation of 1.28020, while the two-hidden-layer MLP trained best with a [68 72] neuron configuration, giving a correlation coefficient of 0.98880 and a standard deviation of 1.42820. Different learning rates were also adopted for the network training. The results further validate that the single-hidden-layer network trains better for signal power loss prediction than the two-hidden-layer network, with correlation coefficients of 0.99670 and 0.98880, respectively. Furthermore, a learning rate of 0.003 showed the best training capability, with lower mean squared error and higher training regression than the other learning rates tested, for both the single- and two-hidden-layer MLP networks.
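The comparison the abstract describes can be sketched with a minimal single-hidden-layer MLP trained by backpropagation at several learning rates, reporting the mean squared error and correlation coefficient used as figures of merit in the paper. This is an illustrative sketch only: the paper's measured path-loss data are not public, so synthetic log-distance path-loss data stand in for them, and the network sizes, epoch count, and data-generation parameters here are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for measured path-loss data (assumed, not the
# paper's measurements): log-distance model plus Gaussian noise.
d = rng.uniform(0.1, 5.0, size=(200, 1))                      # distance (km)
y = 40.0 + 30.0 * np.log10(d) + rng.normal(0, 1.5, d.shape)   # path loss (dB)

# Normalise input and target for stable gradient-descent training.
X = (d - d.mean()) / d.std()
t = (y - y.mean()) / y.std()

def train_mlp(X, t, hidden=40, lr=0.003, epochs=2000, seed=1):
    """Single-hidden-layer MLP (tanh hidden units, linear output)
    trained with full-batch gradient descent on the MSE loss."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    n = len(X)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)        # hidden-layer activations
        out = h @ W2 + b2               # linear output layer
        err = out - t
        # Backpropagate the MSE gradient through both layers.
        gW2 = h.T @ err / n
        gb2 = err.mean(0)
        dh = (err @ W2.T) * (1.0 - h**2)   # tanh derivative
        gW1 = X.T @ dh / n
        gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    mse = float(np.mean((out - t) ** 2))
    r = float(np.corrcoef(out.ravel(), t.ravel())[0, 1])
    return mse, r

# Compare learning rates around the paper's preferred value of 0.003.
for lr in (0.001, 0.003, 0.01):
    mse, r = train_mlp(X, t, hidden=40, lr=lr)
    print(f"lr={lr}: MSE={mse:.4f}, correlation={r:.4f}")
```

A two-hidden-layer variant (e.g. the [68 72] configuration reported in the abstract) would follow the same pattern with one extra weight matrix and an additional backpropagation step through the second hidden layer.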
