Statistical analysis of NIR data: data pretreatment
Author(s) - Sun Jianguo
Publication year - 1997
Publication title - Journal of Chemometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.47
H-Index - 92
eISSN - 1099-128X
pISSN - 0886-9383
DOI - 10.1002/(sici)1099-128x(199711/12)11:6<525::aid-cem489>3.0.co;2-g
Subject(s) - collinearity , calibration , principal component analysis , data set , curse of dimensionality , statistics , sample size determination , near infrared spectroscopy , statistical model , regression analysis , computer science , linear regression , pattern recognition (psychology) , mathematics , artificial intelligence , physics , quantum mechanics
In the statistical analysis of near‐infrared (NIR) data arising from the calibration of NIR instruments, two steps are often involved. The first is data pretreatment, which usually refers to transformation of the NIR spectra (i.e. the sampled predictor variables, in statistical regression terminology) with the goal of reducing large baseline variations, dimensionality, collinearity and/or the noise level of the observed spectra. The pretreatment is needed partly because measured spectra usually exhibit large baseline variation and/or substantial noise, and because the ratio of sample size to number of predictor variables is typically low. The second step is calibration modeling, which involves applying statistical regression methods to the pretreated NIR data. This paper deals with the data pretreatment step; in particular, a method based on principal component analysis is presented for attacking the problem of large baseline variation. The usefulness of the described method is illustrated through a simulation study and an application to a set of real NIR data. © 1997 John Wiley & Sons, Ltd.
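The abstract does not spell out the paper's exact procedure, but the general idea of PCA-based baseline removal can be sketched as follows. This is an illustrative example under assumed synthetic data, not the author's method: the leading principal components of the mean-centred spectra are taken to capture broad baseline variation and are projected out before calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 20, 100
x = np.linspace(0.0, 1.0, n_wavelengths)

# Hypothetical NIR-like spectra for illustration: a shared absorption
# peak plus sample-specific linear baselines and measurement noise.
peak = np.exp(-((x - 0.5) ** 2) / 0.005)
baselines = rng.normal(size=(n_samples, 1)) * x + rng.normal(size=(n_samples, 1))
spectra = peak + baselines + 0.01 * rng.normal(size=(n_samples, n_wavelengths))

def remove_leading_pcs(spectra, n_remove=2):
    """Project out the first n_remove principal components of the
    mean-centred spectra (assumed here to model baseline variation)."""
    mean = spectra.mean(axis=0)
    centred = spectra - mean
    # SVD of the centred data matrix: rows of vt are the PC loadings.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    v = vt[:n_remove].T                    # loadings to discard
    cleaned = centred - centred @ v @ v.T  # residual after projection
    return cleaned + mean

pretreated = remove_leading_pcs(spectra, n_remove=2)
print(pretreated.shape)  # (20, 100)
```

The choice of how many components to discard (here `n_remove=2`, matching the two synthetic baseline terms) is itself a modeling decision; in practice it would be guided by inspection of the loadings or by cross-validated calibration performance.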
