An overview of reciprocal L1-regularization for high dimensional regression data
Author(s) - Song, Qifan
Publication year - 2017
Publication title - Wiley Interdisciplinary Reviews: Computational Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.693
H-Index - 38
eISSN - 1939-0068
pISSN - 1939-5108
DOI - 10.1002/wics.1416
Subject(s) - penalty method , regularization , lasso , reciprocal , model selection , mathematical optimization , mathematics , computer science
High dimensional data plays a key role in modern statistical analysis. A common objective of high dimensional data analysis is model selection, and the penalized likelihood method is one of the most popular approaches. Typical penalty functions are symmetric about 0, continuous, and nondecreasing on (0, ∞). In this review article, we focus on a special type of penalty function, the so-called reciprocal Lasso (rLasso) penalty. The rLasso penalty functions are decreasing on (0, ∞), discontinuous at 0, and diverge to infinity as the coefficients approach zero. Although uncommon, this choice of penalty is intuitively appealing if one seeks a parsimonious model fit. In this article, we provide an overview of the motivation, theory, and computational challenges of the rLasso penalty, and we compare its theoretical properties and empirical performance with those of other popular penalty choices. WIREs Comput Stat 2018, 10:e1416. doi: 10.1002/wics.1416 This article is categorized under: Statistical Learning and Exploratory Methods of the Data Sciences > Modeling Methods
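The shape of the penalty described in the abstract can be sketched numerically. The snippet below assumes the commonly cited rLasso form, p_λ(β_j) = λ/|β_j| for β_j ≠ 0 and p_λ(0) = 0; the function names and scaling are illustrative, not taken from the article itself:

```python
import numpy as np

def rlasso_penalty(beta, lam):
    """Sketch of the reciprocal Lasso penalty: lam / |beta_j| for each
    nonzero coefficient, and 0 for a coefficient exactly at 0.  This makes
    the penalty decreasing on (0, inf), discontinuous at 0, and divergent
    to infinity as a nonzero coefficient shrinks toward 0."""
    beta = np.asarray(beta, dtype=float)
    nonzero = beta != 0          # zero coefficients contribute no penalty
    return float(np.sum(lam / np.abs(beta[nonzero])))

def lasso_penalty(beta, lam):
    """Standard Lasso penalty for comparison: lam * sum_j |beta_j|,
    which is continuous at 0 and increasing on (0, inf)."""
    beta = np.asarray(beta, dtype=float)
    return float(lam * np.sum(np.abs(beta)))
```

For example, `rlasso_penalty([0.5], 1.0)` exceeds `rlasso_penalty([1.0], 1.0)`, illustrating that, unlike the Lasso, rLasso penalizes small nonzero coefficients more heavily than large ones, which is what encourages a parsimonious fit.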