
Unsupervised SIFT features-to-Image Translation using CycleGAN
Author(s) -
Sławomir Maćkowiak,
Patryk Brudz,
Mikołaj Ciesielski,
Maciej Wawrzyniak
Publication year - 2021
Publication title -
Computer Science Research Notes
Language(s) - English
Resource type - Conference proceedings
eISSN - 2464-4625
pISSN - 2464-4617
DOI - 10.24132/csrn.2021.3002.24
Subject(s) - computer science , scale invariant feature transform , artificial intelligence , image translation , pattern recognition , feature extraction , computer vision , generative adversarial networks , image coding
Generating video content from a small set of data representing object features has promising applications. This is particularly relevant to the work of the MPEG Video Coding for Machines (VCM) group, which pursues efficient image coding for both machines and humans. Converting feature points, which machines understand well, into a video form that humans can easily interpret remains an open challenge. This paper presents results on generating images from a set of SIFT feature points, without descriptors, using the CycleGAN generative adversarial network. The impact of the SIFT keypoint representation method on the quality of network training is examined, and the generated images are evaluated subjectively.
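To make the described setup concrete, the sketch below shows one plausible way to rasterize descriptor-free SIFT keypoints (position, scale, orientation) into an image that could serve as the source domain for unpaired CycleGAN training. This is not the authors' exact pipeline: the rendering scheme (circle radius from keypoint size, a tick mark for orientation), the function name keypoints_to_image, and the file names are assumptions made purely for illustration.

```python
# Minimal sketch: render SIFT keypoints (no descriptors) into a raster image
# that could act as the keypoint-domain input of a CycleGAN. The encoding of
# scale and orientation below is an assumed choice, not the paper's method.
import cv2
import numpy as np

def keypoints_to_image(gray, max_keypoints=500):
    """Detect SIFT keypoints and draw them onto a blank canvas."""
    sift = cv2.SIFT_create(nfeatures=max_keypoints)
    keypoints = sift.detect(gray, None)            # keypoints only, no descriptors

    canvas = np.zeros_like(gray)
    for kp in keypoints:
        x, y = map(int, kp.pt)
        radius = max(1, int(round(kp.size / 2)))   # keypoint scale -> circle radius
        cv2.circle(canvas, (x, y), radius, 255, 1)
        # Encode orientation as a short line from the centre (assumed encoding).
        angle = np.deg2rad(kp.angle)
        x2 = int(round(x + radius * np.cos(angle)))
        y2 = int(round(y + radius * np.sin(angle)))
        cv2.line(canvas, (x, y), (x2, y2), 255, 1)
    return canvas

if __name__ == "__main__":
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
    kp_img = keypoints_to_image(img)
    cv2.imwrite("keypoint_representation.png", kp_img)
    # kp_img (keypoint domain) and img (natural-image domain) would then be used
    # as unpaired training data for the two CycleGAN mappings.
```

How the keypoints are drawn (dots, circles, orientation ticks, single- or multi-channel maps) is exactly the kind of representation choice whose effect on training quality the paper investigates.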