
Optimization of computational complexity of an artificial neural network
Author(s) -
Nikolay A. Vershkov,
Viktor Kuchukov,
Natalia Nikolaevna Kuchukova,
Nikolay Nikolaevich Kucherov,
Egor Shiriaev
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.47350/iccs-de.2021.17
Subject(s) - computer science , artificial neural network , computational complexity theory , artificial intelligence , machine learning , algorithm , pattern recognition , mathematics , engineering
The article models an Artificial Neural Network as an information transmission system in order to optimize its computational complexity. Existing theoretical approaches to optimizing the structure and training of neural networks are analyzed. In constructing the model, the well-known problem of extracting a deterministic signal from background noise is considered and adapted to the problem of assigning an input realization to a particular cluster. A layer of neurons is treated as an information transformer whose kernel solves a certain class of problems: orthogonal transformation, matched filtering, and nonlinear transformation for recognizing the input realization with a given accuracy. Analysis of the proposed model leads to the conclusion that the number of neurons in the network's layers, and the number of features used to train the classifier, can both be reduced.
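The abstract's view of a layer as an information transformer can be sketched informally: an orthogonal transformation of the input, matched filtering against stored templates, and a nonlinear transformation of the filter outputs. The following is a minimal illustrative sketch of that decomposition, not the authors' implementation; all dimensions, the random orthogonal basis, and the choice of ReLU as the nonlinearity are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper)
n_in, n_neurons = 64, 16

# Orthogonal transformation: an orthonormal basis obtained by QR
# decomposition of a random matrix (illustrative choice)
Q, _ = np.linalg.qr(rng.standard_normal((n_in, n_in)))

# Matched filtering: correlate the transformed input with stored
# class templates, playing the role of the layer's weight vectors
templates = rng.standard_normal((n_neurons, n_in))

def layer(x):
    """One layer viewed as orthogonal transform -> matched filter -> nonlinearity."""
    z = Q.T @ x                     # orthogonal transform of the input
    scores = templates @ z          # matched filtering against each template
    return np.maximum(scores, 0.0)  # nonlinear transformation (ReLU here)

x = rng.standard_normal(n_in)
out = layer(x)
print(out.shape)
```

In this picture, reducing the number of neurons corresponds to keeping fewer templates, and reducing the number of features corresponds to truncating the orthogonal transform to the components that carry most of the signal energy.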