Prediction Variance and Information Worth of Observations in Time Series
Author(s) - Pourahmadi Mohsen, Soofi E. S.
Publication year - 2000
Publication title - Journal of Time Series Analysis
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.576
H-Index - 54
eISSN - 1467-9892
pISSN - 0143-9782
DOI - 10.1111/1467-9892.00191
Subject(s) - mathematics , estimator , autoregressive model , statistics , entropy (arrow of time) , series (stratigraphy) , measure (data warehouse) , econometrics , variance (accounting) , gaussian , fisher information , upper and lower bounds , time series , bias of an estimator , minimum variance unbiased estimator , computer science , data mining , quantum mechanics , paleontology , mathematical analysis , physics , accounting , business , biology
The problem of developing measures of the worth of observations in time series has received little attention in the literature. Any meaningful measure of worth should depend on the position of the observation as well as on the objective of the analysis, namely parameter estimation or prediction of future values. We introduce a measure that quantifies the worth of a set of observations for the purpose of predicting outcomes of stationary processes. The worth is measured as the change in the information content of the entire past due to the exclusion or inclusion of a set of observations. The information content is quantified by the mutual information, the information-theoretic measure of dependency. For Gaussian processes, the measure of worth turns out to be the relative change in the prediction error variance due to the exclusion or inclusion of a set of observations. We provide formulae for computing the predictive worth of a set of observations for Gaussian autoregressive moving-average processes. For non-Gaussian processes, however, a simple function of the entropy of the process provides a lower bound for the prediction error variance, in the same manner that Fisher information provides a lower bound for the variance of an unbiased estimator via the Cramér-Rao inequality. Statistical estimation of this lower bound requires estimation of the entropy of a stationary time series.
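
To make the Gaussian case concrete, the following is a minimal numerical sketch, not the paper's own algorithm: it assumes a stationary AR(1) process (the parameter values phi = 0.8 and sigma^2 = 1, and the helper names ar1_autocov and pred_error_variance, are illustrative), and it measures the worth of the most recent observation as the relative change in the one-step prediction error variance when that observation is excluded; the exact normalization used in the paper may differ.

import numpy as np

def ar1_autocov(phi, sigma2, max_lag):
    # gamma(h) = sigma2 * phi**h / (1 - phi**2) for a stationary AR(1), h >= 0
    return (sigma2 / (1.0 - phi**2)) * phi ** np.arange(max_lag + 1)

def pred_error_variance(gamma, lags):
    # Error variance of the best linear predictor of X_{n+1} from the
    # observations at the given positive lags (zero-mean Gaussian case):
    # gamma(0) - c' G^{-1} c, with c and G built from the autocovariances.
    lags = np.asarray(lags)
    c = gamma[lags]
    G = gamma[np.abs(lags[:, None] - lags[None, :])]
    return gamma[0] - c @ np.linalg.solve(G, c)

phi, sigma2, n = 0.8, 1.0, 10
gamma = ar1_autocov(phi, sigma2, max_lag=n)

v_full = pred_error_variance(gamma, np.arange(1, n + 1))  # full past X_n, ..., X_1
v_red  = pred_error_variance(gamma, np.arange(2, n + 1))  # same past with X_n excluded

worth = (v_red - v_full) / v_red      # relative change in prediction error variance
info  = 0.5 * np.log(v_red / v_full)  # matching Gaussian mutual-information change
print(f"v_full={v_full:.4f}  v_reduced={v_red:.4f}  worth={worth:.4f}  info={info:.4f}")

For AR(1) the result can be checked in closed form: excluding the lag-one observation forces a two-step-ahead predictor, whose error variance is sigma^2(1 + phi^2), so the relative change is phi^2 / (1 + phi^2), about 0.39 for phi = 0.8.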