Sparsity and smoothness via the fused lasso
Author(s) - Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, Keith Knight
Publication year - 2005
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/j.1467-9868.2005.00490.x
Subject(s) - lasso, elastic net regularization, sparsity, smoothness, L1-norm, least squares regression, hinge loss, classification, regression, mathematical optimization, pattern recognition, support vector machine, statistics, estimator
Summary. The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the ‘fused lasso’, a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the ‘hinge’ loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
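The penalty described in the summary can be written down concretely. Below is a sketch of the criterion in penalized (Lagrangian) form, with assumed tuning parameters lambda_1 and lambda_2; the paper itself states the problem equivalently with bound constraints s_1 and s_2 on the two L1 terms:

\hat{\beta} = \arg\min_{\beta} \; \frac{1}{2} \sum_{i=1}^{N} \Big( y_i - \sum_{j=1}^{p} x_{ij} \beta_j \Big)^2 + \lambda_1 \sum_{j=1}^{p} |\beta_j| + \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|

Here lambda_1 controls how many coefficients are exactly 0 and lambda_2 controls how many successive differences are exactly 0, which produces the locally constant coefficient profile mentioned in the summary.

Because the criterion is convex, a minimal numerical sketch can hand the problem to a generic convex solver. The snippet below uses cvxpy; it is an illustration only, not the optimization approach used in the paper, and the data and parameter values are made up:

import cvxpy as cp
import numpy as np

# Toy data with p much greater than N; the features are assumed to be meaningfully ordered.
rng = np.random.default_rng(0)
N, p = 20, 100
X = rng.standard_normal((N, p))
y = rng.standard_normal(N)
lam1, lam2 = 1.0, 1.0  # sparsity and local-constancy tuning parameters (illustrative values)

beta = cp.Variable(p)
loss = 0.5 * cp.sum_squares(y - X @ beta)
penalty = lam1 * cp.norm1(beta) + lam2 * cp.norm1(cp.diff(beta))
cp.Problem(cp.Minimize(loss + penalty)).solve()

# beta.value is the fitted coefficient profile; larger lam1/lam2 push it toward
# more zeros and longer constant stretches across the feature ordering.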
