Canonical partial least squares and continuum power regression
Author(s) -
de Jong, Sijmen,
Wise, Barry M.,
Ricker, N. Lawrence
Publication year - 2001
Publication title -
Journal of Chemometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.47
H-Index - 92
eISSN - 1099-128X
pISSN - 0886-9383
DOI - 10.1002/1099-128x(200102)15:2<85::aid-cem601>3.0.co;2-9
Subject(s) - multicollinearity , mathematics , singular value decomposition , condition number , principal component regression , computation , partial least squares regression , singular value , power series , principal component analysis , least squares function approximation , regression analysis , total least squares , statistics , algorithm , mathematical analysis , estimator , eigenvalues and eigenvectors , physics , quantum mechanics
Abstract - A method, canonical PLS, is proposed for performing the basic PLS calculations in the canonical co‐ordinate system of the predictor matrix X. This reduces the size of the problem to its smallest possible dimension as determined by the rank of X. The computation is further simplified since the cross‐product matrices XᵀX and XXᵀ are symmetric. PLS weights, scores and loadings referring to the canonical co‐ordinate system can be easily back‐transformed to the original co‐ordinate system. The method offers an ideal setting to carry out the continuum regression approach to PLS introduced by Wise and Ricker. By raising the singular values to some power γ, one may artificially decrease (γ < 1) or increase (γ > 1) the degree of multicollinearity in the X data. One may investigate a series of models by considering various values of the power γ. This offers a means to push the model into the direction of ordinary least squares (γ = 0) or principal components regression (γ→∞), with PLS regression as an intermediate case (γ = 1). Since all these computations are mainly performed in canonical space, obtained after one singular value decomposition, a considerable gain in speed is achieved. This is demonstrated over a wide range of data set sizes (number of X and Y variables, number of samples) and model parameters (number of latent variables and number of powers considered). The gains in computational efficiency (as measured by the ratio of the number of floating point operations required relative to the original algorithm) range from a factor of 3.9 to over 100. Copyright © 2000 John Wiley & Sons, Ltd.
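The procedure described in the abstract can be illustrated with a short sketch, assuming NumPy: compute the SVD of X once, raise the singular values to a power γ to form the canonical-space data, run a basic single-response PLS (NIPALS) there, and back-transform the coefficients. This is a hypothetical illustration of the general idea, not the authors' published algorithm; the function name `cpr_fit` and all parameter choices are assumptions.

```python
import numpy as np

def cpr_fit(X, y, gamma=1.0, n_components=2):
    """Hypothetical sketch of continuum power regression via one SVD.

    gamma = 0 pushes toward ordinary least squares, gamma = 1 gives
    ordinary PLS, and large gamma approaches principal components
    regression, as described in the abstract.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # One SVD of X; all further work happens in canonical space.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    r = int(np.sum(s > 1e-12 * s[0]))      # numerical rank of X
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    T = U * s**gamma                        # "powered" canonical data
    z = yc.copy()
    W = np.zeros((r, n_components))
    P = np.zeros((r, n_components))
    q = np.zeros(n_components)
    for a in range(n_components):
        # PLS1 (NIPALS) step on the canonical-space matrix.
        w = T.T @ z
        w /= np.linalg.norm(w)
        t = T @ w
        p_a = T.T @ t / (t @ t)
        q[a] = z @ t / (t @ t)
        T = T - np.outer(t, p_a)            # deflate X-side
        z = z - q[a] * t                    # deflate y-side
        W[:, a], P[:, a] = w, p_a
    # Regression coefficients in canonical space ...
    b_canon = W @ np.linalg.solve(P.T @ W, q)
    # ... back-transformed to the original coordinates: since the
    # canonical data are U diag(s**gamma), b = V diag(s**(gamma-1)) b_canon.
    b = Vt.T @ (s**(gamma - 1.0) * b_canon)
    return b, y.mean() - X.mean(axis=0) @ b

# Toy usage: fit at gamma = 1 (ordinary PLS as the intermediate case).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=50)
b1, b0 = cpr_fit(X, y, gamma=1.0, n_components=2)
```

In this setting a model series over several γ values reuses the single SVD, which is the source of the speed gain the abstract reports: only the cheap `s**gamma` scaling and the PLS loop are repeated per power.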
