
Research on Self-Supervised Comparative Learning for Computer Vision
Author(s) -
Yuanyuan Liu,
Qianqian Liu
Publication year - 2021
Publication title -
Journal of Electronic Research and Application
Language(s) - English
Resource type - Journals
eISSN - 2208-3510
pISSN - 2208-3502
DOI - 10.26689/jera.v5i3.2320
Subject(s) - computer science, artificial intelligence, machine learning, semi-supervised learning, supervised learning, unsupervised learning, generative grammar, pipeline (software), artificial neural network, programming language
In recent years, self-supervised learning, which does not require large numbers of manual labels, has generated supervision signals from the data itself to learn representations of samples. Self-supervised learning solves the problem of learning semantic features from unlabeled data and enables pre-training of models on large datasets. Its significant advantages have been extensively studied by scholars in recent years. Self-supervised learning methods usually fall into three types: generative, contrastive, and generative-contrastive. The model used in the comparative (contrastive) learning approach is relatively simple, and its performance on current downstream tasks is comparable to that of supervised learning. We therefore propose a conceptual analysis framework consisting of five stages: the data augmentation pipeline, architectures, pretext tasks, comparison methods, and semi-supervised fine-tuning. Based on this conceptual framework, we qualitatively analyze existing comparative self-supervised learning methods for computer vision, further analyze their performance at the different stages, and finally summarize the research status of self-supervised comparative learning methods in other fields.
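To make the "comparison method" stage concrete, the following is a minimal NumPy sketch of a normalized temperature-scaled contrastive loss (NT-Xent), the objective popularized by methods such as SimCLR. It is an illustration of the general contrastive idea, not the specific formulation of any method surveyed here; the function name, batch shapes, and temperature value are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N images. Row z1[i] is pulled toward its positive z2[i] and
    pushed away from the other 2N - 2 embeddings in the batch.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    # Exclude self-similarity so an embedding is never its own negative.
    np.fill_diagonal(sim, -np.inf)
    # The positive for row i is row i + N, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy over similarities: -log softmax(sim)[i, pos[i]]
    logsumexp = np.log(np.sum(np.exp(sim), axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

In practice the embeddings would come from an encoder applied to two randomly augmented views of each image (the data augmentation pipeline stage); the loss is lower when the two views of the same image map to nearby points on the unit sphere.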