Laplace Error Penalty‐based Variable Selection in High Dimension
Author(s) - Wen Canhong, Wang Xueqin, Wang Shaoli
Publication year - 2015
Publication title - Scandinavian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.359
H-Index - 65
eISSN - 1467-9469
pISSN - 0303-6898
DOI - 10.1111/sjos.12130
Subject(s) - mathematics , statistics , penalty method , estimator , feature selection , variable selection , mathematical optimization , convex optimization , convex function , differentiable function
We propose the Laplace Error Penalty (LEP) function for variable selection in high‐dimensional regression. Unlike penalty functions built from piecewise spline constructions, the LEP is constructed as an exponential function with two tuning parameters and is infinitely differentiable everywhere except at the origin. With this construction, the LEP‐based procedure acquires extra flexibility in variable selection, admits a unified derivative formula in optimization and is able to approximate the L0 penalty as closely as desired. We show that the LEP procedure can identify relevant predictors in exponentially high‐dimensional regression with normal errors. We also establish the oracle property for the LEP estimator. Although the LEP itself is not convex, it yields a convex penalized least squares objective under mild conditions when p is no greater than n. A coordinate descent majorization‐minimization algorithm is introduced to implement the LEP procedure. In simulations and a real data analysis, the LEP methodology performs favorably compared with competing procedures.
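The abstract describes a penalty that is an exponential function of the coefficient magnitude with two tuning parameters, smooth everywhere except at the origin, and able to approximate the L0 penalty arbitrarily well. The exact functional form is given in the paper itself; the sketch below uses a hypothetical form p(t) = λ(1 − exp(−|t|/κ)) that has these stated properties (the |t| term creates the kink at the origin, and shrinking κ drives the penalty toward λ·1{t ≠ 0}), purely as an illustration:

```python
import math

def lep(t, lam=1.0, kappa=0.1):
    # Hypothetical LEP-style penalty: lam * (1 - exp(-|t|/kappa)).
    # Smooth for t != 0; non-differentiable only at the origin,
    # which is what induces exact zeros (sparsity) in the estimate.
    return lam * (1.0 - math.exp(-abs(t) / kappa))

def lep_deriv(t, lam=1.0, kappa=0.1):
    # Unified derivative formula for t != 0:
    # d/dt lep(t) = sign(t) * (lam/kappa) * exp(-|t|/kappa).
    sign = 1.0 if t > 0 else -1.0
    return sign * (lam / kappa) * math.exp(-abs(t) / kappa)

# As kappa -> 0, the penalty of any nonzero t approaches lam,
# while lep(0) stays exactly 0 -- i.e. it approximates the
# L0 penalty lam * 1{t != 0} arbitrarily closely.
for kappa in (1.0, 0.1, 0.01):
    print(f"kappa={kappa:5}: lep(0.5) = {lep(0.5, 1.0, kappa):.6f}")
```

Running the loop shows the penalty at t = 0.5 climbing toward λ = 1 as κ shrinks, matching the abstract's claim about L0 approximation; the names `lep` and `lep_deriv` and the default tuning values are illustrative, not from the paper.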
