
Sample Bias Effect on Meta-Learning
Author(s) - Mariane Reis, Ana Carolina Lorena
Publication year - 2020
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5753/eniac.2020.12137
Subject(s) - meta learning (computer science) , computer science , machine learning , artificial intelligence , sample size determination , sampling bias , data science , statistics , mathematics
Sample bias is a common issue in traditional machine learning studies, but it is rarely considered when discussing meta-learning. It occurs when the training data sample lacks or overemphasizes one or more characteristics relative to others. As a result, models trained on such data may become inaccurate for some instances. This work analyzes this issue in the meta-learning context. Indeed, in most of the meta-learning literature, a random sample of datasets is taken for building meta-models, yet there is no discussion of a possible bias left uncontrolled by such random sampling. This work analyzes these effects, not only to discuss their consequences, but also to start a debate on the need to consider them in meta-learning research.
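To make the notion of sampling bias concrete, the following minimal sketch (not from the paper; all names and numbers are illustrative assumptions) contrasts a uniform random sample of datasets with a biased sample that overemphasizes one characteristic, here the number of instances per dataset, a typical meta-feature:

```python
import random
import statistics

# Hypothetical pool of 1000 datasets, each summarized by one meta-feature:
# its number of instances. The size range is an assumption for illustration.
random.seed(0)
pool = [random.randint(50, 10_000) for _ in range(1000)]

# Random sampling: draw 100 datasets uniformly from the pool.
random_sample = random.sample(pool, 100)

# Biased sampling: keep only small datasets (fewer than 2000 instances),
# mimicking an uncontrolled preference for easily available benchmarks.
small = [n for n in pool if n < 2000]
biased_sample = random.sample(small, 100)

# The biased sample's meta-feature distribution is shifted away from the
# pool's, so a meta-model trained on it may misjudge large datasets.
print(statistics.mean(pool))           # mean size over the full pool
print(statistics.mean(random_sample))  # close to the pool mean
print(statistics.mean(biased_sample))  # shifted well below the pool mean
```

The same shift would apply to any meta-feature used to characterize datasets; a meta-model fit on the biased sample never observes the under-represented region of the meta-feature space.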