Open Access
The structure of the local detector of the reprint model of the object in the image
Author(s) -
Anatoly Kulikov
Publication year - 2021
Publication title -
Rossijskij Tehnologičeskij Žurnal / Russian Technological Journal
Language(s) - English
Resource type - Journals
eISSN - 2782-3210
pISSN - 2500-316X
DOI - 10.32362/2500-316x-2021-9-5-7-13
Subject(s) - artificial intelligence , computer science , artificial neural network , object (grammar) , computer vision , representation (politics) , set (abstract data type) , pattern recognition (psychology) , identification (biology) , detector , transformation (genetics) , telecommunications , biochemistry , chemistry , botany , politics , political science , gene , law , biology , programming language
Current methods for recognizing objects in images perform poorly and rely on intellectually unsatisfying techniques. Existing identification systems and methods do not fully solve the identification problem, in particular identification under difficult conditions: interference, changes in lighting, various changes on the face, etc. To address these problems, a local detector for a reprint model of an object in an image was developed and is described here. A transforming autoencoder (TA), a neural network model, was developed for the local detector. This model is a subclass of the general class of reduced-dimension neural networks. In addition to detecting a modified object, the local detector can also determine the object's original shape. A distinctive feature of the TA is that it represents image regions in a compact form and estimates the parameters of the affine transformation. The transforming autoencoder is a heterogeneous network consisting of a set of smaller networks called capsules. Artificial neural networks should use local capsules that perform fairly complex internal computations on their inputs and then encapsulate the results of these computations in a small vector of highly informative outputs. Each capsule learns to recognize an implicitly defined visual object over a limited range of viewing conditions and deformations. It outputs both the probability that the object is present within its limited domain and a set of “instance parameters” that can include the exact pose, lighting, and deformation of the visual object relative to an implicitly defined canonical version of that object. The main advantage of capsules that output instance parameters is that they provide a simple way to recognize entire objects by recognizing their parts.
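The capsule described above can be sketched in a few lines. The following is a minimal, untrained illustration (not the authors' implementation): a capsule maps a flattened image patch to a presence probability and a small vector of instance parameters, here a hypothetical 2-D pose. The layer sizes, weight initialization, and activation choices are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class Capsule:
    """Minimal capsule sketch: internal computation on an input patch,
    then a small vector of highly informative outputs — a presence
    probability plus instance parameters (here a 2-D pose).
    Weights are random and untrained; sizes are illustrative."""

    def __init__(self, in_dim, hidden=16, pose_dim=2):
        self.W_h = rng.normal(0, 0.1, (in_dim, hidden))  # recognition units
        self.W_p = rng.normal(0, 0.1, (hidden, pose_dim))  # pose head
        self.W_a = rng.normal(0, 0.1, (hidden, 1))         # presence head

    def forward(self, x):
        h = np.tanh(x @ self.W_h)                    # internal computation
        pose = h @ self.W_p                          # instance parameters
        prob = 1 / (1 + np.exp(-(h @ self.W_a)))     # presence probability
        return prob.item(), pose

cap = Capsule(in_dim=64)
patch = rng.normal(size=64)       # a flattened 8x8 image patch
p, pose = cap.forward(patch)      # p in (0, 1); pose is a 2-vector
```

In a full transforming autoencoder, a generative part of each capsule would also reconstruct the patch after an externally supplied affine shift of the pose, which is what forces the pose vector to track the actual transformation.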
The capsule can learn to output the pose of its visual object as a vector that is linearly related to the “natural” pose representations used in computer graphics. This yields a simple and highly selective test of whether the visual objects represented by two active capsules A and B are in the correct spatial relationship to activate a higher-level capsule C. The transforming autoencoder solves the problem of identifying facial images under interference (noise) and changes in illumination and viewing angle.
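That selectivity test can be illustrated numerically. In the sketch below (an assumption-laden toy, not from the paper), capsules A and B each predict the pose of the higher-level object C by applying their own part-to-whole transform; C is activated only when the two predictions agree. The transforms, offsets, and tolerance are invented for illustration.

```python
import numpy as np

# Illustrative part-to-whole transforms: each maps a part's pose to a
# prediction of the whole object C's pose (values chosen arbitrarily).
T_A = np.eye(2); b_A = np.array([2.0, 1.0])   # A's learned transform
T_B = np.eye(2); b_B = np.array([-1.0, 3.0])  # B's learned transform

def agree(pose_a, pose_b, tol=0.5):
    """Activate C only if A's and B's predictions of C's pose coincide."""
    pred_a = T_A @ pose_a + b_A   # A's prediction of C's pose
    pred_b = T_B @ pose_b + b_B   # B's prediction of C's pose
    return np.linalg.norm(pred_a - pred_b) < tol

# Consistent parts: both predict C at (3, 4), so C would fire.
print(agree(np.array([1.0, 3.0]), np.array([4.0, 1.0])))  # True
# Inconsistent parts: predictions disagree, so C stays silent.
print(agree(np.array([0.0, 0.0]), np.array([4.0, 1.0])))  # False
```

The test is highly selective because agreement must hold across all pose dimensions at once; parts in the wrong relative positions almost never produce coinciding predictions by chance.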
