Learning out of leaders
Author(s) - Mougeot Mathilde, Picard Dominique, Tribouley Karine
Publication year - 2012
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/j.1467-9868.2011.01024.x
Subject(s) - thresholding , minimax , consistency (knowledge bases) , curse of dimensionality , computer science , regression , dimensionality reduction , artificial intelligence , class (philosophy) , regression analysis , exponential function , linear regression , machine learning , mathematical optimization , mathematics , statistics , mathematical analysis , image (mathematics)
Summary.  The paper investigates the estimation problem in a regression-type model. To deal with potentially high dimensions, we provide a procedure called LOL, for learning out of leaders, which involves no optimization step. LOL is an auto-driven algorithm with two thresholding steps. A first adaptive thresholding selects leaders among the initial regressors, providing a first reduction of dimensionality. A second thresholding is then performed on the linear regression on the leaders. The consistency of the procedure is investigated. Exponential bounds are obtained, leading to minimax and adaptive results for a wide class of sparse parameters, with (quasi) no restriction on the number p of possible regressors. An extensive computational experiment is conducted to emphasize the good practical performance of LOL.
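
The following Python sketch is only a rough illustration of the two-step idea summarized above, not the authors' implementation: leaders are selected by thresholding empirical correlations between the regressors and the response, ordinary least squares is then fitted on the leaders, and the fitted coefficients are thresholded a second time. The thresholds of order sqrt(2 log p / n), the function name lol_sketch, and the toy data are illustrative assumptions, not the calibrated choices from the paper.

```python
import numpy as np

def lol_sketch(X, y, tau1=None, tau2=None):
    """Rough two-step 'leaders' sketch: threshold correlations to pick leaders,
    fit least squares on the leaders, then threshold the coefficients.

    tau1 and tau2 are illustrative thresholds, not the calibrated choices
    studied in the paper.
    """
    n, p = X.shape
    # Standardize regressors and center the response so correlations are comparable.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = y - y.mean()

    # Illustrative universal-style thresholds of order sqrt(log p / n).
    if tau1 is None:
        tau1 = np.sqrt(2.0 * np.log(p) / n)
    if tau2 is None:
        tau2 = np.sqrt(2.0 * np.log(p) / n)

    # Step 1: first thresholding -- select leaders whose empirical correlation
    # with the response exceeds tau1.
    corr = np.abs(Xs.T @ yc) / (n * yc.std())
    leaders = np.flatnonzero(corr > tau1)
    beta = np.zeros(p)
    if leaders.size == 0:
        return beta, leaders

    # Step 2: ordinary least squares on the leaders only, followed by a second
    # thresholding of the estimated coefficients.
    beta_lead, *_ = np.linalg.lstsq(Xs[:, leaders], yc, rcond=None)
    beta_lead[np.abs(beta_lead) < tau2] = 0.0
    beta[leaders] = beta_lead
    return beta, leaders

# Toy sparse regression: only the first 5 of p = 1000 regressors matter.
rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 3.0
y = X @ beta_true + rng.standard_normal(n)

beta_hat, leaders = lol_sketch(X, y)
print("leaders found:", leaders)
print("nonzero coefficients:", np.flatnonzero(beta_hat))
```

Because both steps are simple thresholding and one least-squares fit on a reduced design, the procedure avoids any iterative optimization, which is the point emphasized in the summary.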