
Unsupervised SIFT features-to-Image Translation using CycleGAN
Author(s) -
Sławomir Maćkowiak,
Patryk Brudz,
Mikołaj Ciesielsk,
M. Wawrzyniak
Publication year - 2021
Publication title -
Computer Science Research Notes
Language(s) - English
Resource type - Conference proceedings
SCImago Journal Rank - 0.11
H-Index - 4
eISSN - 2464-4625
pISSN - 2464-4617
DOI - 10.24132/csrn.2021.3101.24
Subject(s) - scale invariant feature transform , computer science , artificial intelligence , image translation , pattern recognition , feature extraction , computer vision , image (mathematics) , mathematics
The generation of video content from a small set of data representing the features of objects has very promising application prospects. This is particularly important in the context of the work of the MPEG Video Coding for Machines group, which is pursuing various efforts related to image coding that is efficient for both machines and humans. Representing feature points, which machines understand well, in a video form that humans can easily interpret is an important current challenge. This paper presents results on generating images from a set of SIFT feature points, without descriptors, using the generative adversarial network CycleGAN. The impact of the SIFT keypoint representation method on the learning quality of the network is analyzed, and the generated images are evaluated subjectively.
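Since the abstract hinges on how SIFT keypoints (positions and scales, without descriptors) are represented as an input image for the network, a minimal sketch of one plausible representation may help. The helper below is hypothetical and not the paper's exact method: it rasterizes each keypoint as a filled square whose side length reflects the keypoint's scale, producing a grayscale "keypoint map" that a CycleGAN generator could consume.

```python
# Hedged sketch, stdlib only: rasterize SIFT keypoints into a grayscale map.
# Each keypoint is a (x, y, size) tuple; exact encoding in the paper may differ.

def keypoints_to_map(keypoints, width, height):
    """Render keypoints as filled squares whose side reflects scale."""
    img = [[0] * width for _ in range(height)]   # black background
    for x, y, size in keypoints:
        r = max(1, int(size) // 2)               # half-side from keypoint scale
        for yy in range(max(0, y - r), min(height, y + r + 1)):
            for xx in range(max(0, x - r), min(width, x + r + 1)):
                img[yy][xx] = 255                # mark keypoint neighborhood
    return img

# Example: one keypoint of scale 2 at the center of a 5x5 map.
m = keypoints_to_map([(2, 2, 2)], 5, 5)
```

In practice a representation like this would be stacked with (or compared against) the photographic domain during unpaired CycleGAN training; how keypoints are drawn (dots, circles, scale-sized blobs) is exactly the design choice the paper studies.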