
Finding Significant Features for Few-Shot Learning using Dimensionality Reduction Techniques
Author(s) -
Mauricio Mendez,
Gilberto Ochoa-Ruiz,
I. García,
Andres Méndez-Vázquez
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.52591/lxai202106213
Subject(s) - computer science, artificial intelligence, machine learning, few-shot learning, metric learning, dimensionality reduction, discriminative model, similarity, curse of dimensionality, feature vector, pattern recognition, data mining
Few-shot learning is a relatively new technique that specializes in problems where only a small amount of data is available. The goal of this method is to classify categories that have not been seen before using just a handful of samples. Recent approaches, such as metric learning, adopt the meta-learning setting, in which episodic tasks are composed of support (training) data and query (test) data. Metric learning methods have demonstrated that simple models can achieve good performance by learning a similarity function to compare the support and query data. However, the feature space learned by metric learning may not exploit the information provided by a specific few-shot task. In this work, we explore the use of dimensionality reduction techniques as a way to find task-significant features. We measure the quality of the reduced features with a score based on intra-class and inter-class distances, and we select the method for which instances of different classes are distant and instances of the same class are close. This module improves classification accuracy by giving the similarity function, provided by the metric learning method, more discriminative features for the classification.
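The selection step described in the abstract can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it uses a PCA-style projection (implemented directly via SVD) as one example dimensionality reduction, and a simple inter-class / intra-class distance ratio as the selection score; the function names and the toy two-class support set are assumptions made for the example.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Center the features and project onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def separation_score(Z, y):
    """Ratio of mean inter-class centroid distance to mean intra-class distance.

    Higher values mean instances of different classes are distant while
    instances of the same class stay close, matching the selection criterion
    described in the abstract.
    """
    classes = np.unique(y)
    centroids = np.array([Z[y == c].mean(axis=0) for c in classes])
    # Average distance of each instance to its own class centroid.
    intra = np.mean([
        np.linalg.norm(Z[y == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    # Average pairwise distance between class centroids.
    inter = np.mean([
        np.linalg.norm(centroids[i] - centroids[j])
        for i in range(len(classes))
        for j in range(i + 1, len(classes))
    ])
    return inter / (intra + 1e-12)

# Toy support set: two classes of 5 samples each in a 10-d feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (5, 10)),
               rng.normal(3.0, 1.0, (5, 10))])
y = np.array([0] * 5 + [1] * 5)

Z = pca_reduce(X, n_components=2)
print("score in original space:", separation_score(X, y))
print("score after reduction:  ", separation_score(Z, y))
```

In the full method, one would compute this score for several candidate reduction techniques on each episode's support set and keep the reduction whose features score highest before applying the similarity function.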