A flexible procedure for mixture proportion estimation in positive‐unlabeled learning
Author(s) - Lin, Zhenfeng; Long, James P.
Publication year - 2020
Publication title - Statistical Analysis and Data Mining: The ASA Data Science Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 33
eISSN - 1932-1872
pISSN - 1932-1864
DOI - 10.1002/sam.11447
Subject(s) - estimator, classifier (UML), artificial intelligence, VC dimension, mathematics, machine learning, pattern recognition (psychology), generalization, computer science, statistics, mathematical analysis
Abstract Positive‐unlabeled (PU) learning considers two samples: a positive set P with observations from only one class, and an unlabeled set U with observations from two classes. The goal is to classify the observations in U. Class mixture proportion estimation (MPE) in U is a key step in PU learning. Blanchard et al. showed that MPE in PU learning is a generalization of the problem of estimating the proportion of true null hypotheses in multiple testing. Motivated by this idea, we propose reducing the problem to one dimension by constructing a probabilistic classifier trained on the P and U data sets, and then applying a one‐dimensional mixture proportion method from the multiple testing literature to the resulting class probabilities. The flexibility of this framework lies in the freedom to choose the classifier and the one‐dimensional MPE method. We prove consistency of two mixture proportion estimators using bounds from empirical process theory, develop tuning‐parameter‐free implementations, and demonstrate that they have competitive performance on simulated waveform data and a protein signaling problem.
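The two-step procedure described in the abstract (reduce to one dimension with a probabilistic classifier, then apply a one-dimensional MPE method) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the logistic regression classifier, the empirical-CDF mapping of unlabeled scores to p-value-like quantities, and a Storey-type pi0 estimator with lambda = 0.5 are all hypothetical stand-ins, since this record does not name the specific classifier or the two estimators studied in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def storey_pi0(pvals, lam=0.5):
    # Storey-type estimate of the proportion of true nulls from
    # one-dimensional p-value-like scores: #{p > lam} / ((1 - lam) n),
    # truncated at 1. Used here purely as an example 1-D MPE method.
    pvals = np.asarray(pvals)
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

def pu_positive_proportion(P, U):
    # Step 1: reduce to one dimension. Train a probabilistic classifier
    # to separate the positive set P (label 1) from the unlabeled set U
    # (label 0), then score every observation.
    X = np.vstack([P, U])
    y = np.concatenate([np.ones(len(P)), np.zeros(len(U))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores_P = clf.predict_proba(P)[:, 1]
    scores_U = clf.predict_proba(U)[:, 1]
    # Step 2: map each unlabeled score through the empirical CDF of the
    # positive scores, so that truly positive points in U look roughly
    # uniform (the "nulls" in the multiple testing analogy) while
    # negatives pile up near zero (the "alternatives") ...
    pvals = np.searchsorted(np.sort(scores_P), scores_U,
                            side="right") / len(scores_P)
    # ... then apply a one-dimensional MPE method from the multiple
    # testing literature to these scores. Under this analogy, pi0
    # estimates the proportion of positive observations in U.
    return storey_pi0(pvals)

In practice one would score U with a held-out split or cross-fitting rather than on the training data, and the paper's tuning-parameter-free estimators would replace the fixed-lambda Storey step shown here.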