Fast Projection‐Based Methods for the Least Squares Nonnegative Matrix Approximation Problem
Author(s) -
Dongmin Kim,
Suvrit Sra,
Inderjit S. Dhillon
Publication year - 2008
Publication title -
Statistical Analysis and Data Mining: The ASA Data Science Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 33
eISSN - 1932-1872
pISSN - 1932-1864
DOI - 10.1002/sam.104
Subject(s) - computer science, conjugate gradient method, algorithm, Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, gradient descent, convergence, least squares function approximation, regularization, nonnegative matrix factorization, mathematical optimization, line search, matrix decomposition, mathematics
Nonnegative matrix approximation (NNMA) is a popular matrix decomposition technique that has proven useful across diverse fields, with applications ranging from document analysis and image processing to bioinformatics and signal processing. Over the years, several algorithms for NNMA have been proposed, e.g., Lee and Seung's multiplicative updates, alternating least squares (ALS), and gradient descent‐based procedures. However, most of these procedures suffer from slow convergence, numerical instability, or, at worst, serious theoretical drawbacks. In this paper, we develop a new and improved algorithmic framework for the least‐squares NNMA problem that is not only theoretically well‐founded but also overcomes many deficiencies of other methods. Our framework readily admits powerful optimization techniques, and as concrete realizations we present implementations based on the Newton, BFGS, and conjugate gradient methods. Our algorithms provide numerical results superior to both Lee and Seung's method and the alternating least squares heuristic, which has been reported to work well in some situations but has no theoretical guarantees [1]. Our approach extends naturally to include regularization and box‐constraints without sacrificing convergence guarantees. We present experimental results on both synthetic and real‐world datasets that demonstrate the superiority of our methods, in terms of both better approximations and computational efficiency. Copyright © 2007 Wiley Periodicals, Inc., A Wiley Company. Statistical Analysis and Data Mining 1: 000‐000, 2007.
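For context, the least‐squares NNMA problem described in the abstract is min ||A − WH||_F^2 subject to W ≥ 0 and H ≥ 0. The sketch below is a minimal projected‐gradient illustration of that objective in Python; it is not the Newton, BFGS, or conjugate gradient implementation developed in the paper, and the function name, fixed step size, and iteration count are illustrative assumptions.

import numpy as np

def nnma_projected_gradient(A, r, iters=200, step=1e-3, seed=0):
    # Minimal sketch: alternate gradient steps on W and H for
    # f(W, H) = ||A - W H||_F^2, projecting each iterate onto the
    # nonnegative orthant with an elementwise max against zero.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        R = W @ H - A                              # residual
        W = np.maximum(W - step * (R @ H.T), 0.0)  # grad_W = 2(WH - A)H^T; the factor 2 is folded into step
        R = W @ H - A                              # recompute residual after updating W
        H = np.maximum(H - step * (W.T @ R), 0.0)  # grad_H = 2 W^T(WH - A)
    return W, H

# Illustrative usage on a random nonnegative matrix:
A = np.abs(np.random.default_rng(1).standard_normal((50, 40)))
W, H = nnma_projected_gradient(A, r=5)
print(np.linalg.norm(A - W @ H))

A fixed step size keeps the sketch short but can diverge on badly scaled data; the paper's framework instead applies more careful projection‐based optimization methods that retain convergence guarantees.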
