QUADRATIC TSALLIS ENTROPY BIAS AND GENERALIZED MAXIMUM ENTROPY MODELS
Author(s) -
Hou Yuexian,
Wang Bo,
Song Dawei,
Cao Xiaochun,
Li Wenjie
Publication year - 2014
Publication title -
Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.2012.00443.x
Subject(s) - mathematics , estimator , principle of maximum entropy , tsallis entropy , overfitting , maximum entropy probability distribution , kullback–leibler divergence , entropy (arrow of time) , joint entropy , bayesian probability , mathematical optimization , statistics , computer science , tsallis statistics , artificial intelligence , physics , quantum mechanics , artificial neural network
In density estimation tasks, the Maximum Entropy (Maxent) model can effectively use reliable prior information via nonparametric constraints, that is, linear constraints without empirical parameters. However, reliable prior information is often insufficient, and parametric constraints become necessary but pose considerable implementation complexity. Improper setting of parametric constraints can result in overfitting or underfitting. To alleviate this problem, a generalization of Maxent under the Tsallis entropy framework is proposed. The proposed method introduces a convex quadratic constraint for the correction of the (expected) quadratic Tsallis Entropy Bias (TEB). Specifically, we demonstrate that the expected quadratic Tsallis entropy of sampling distributions is smaller than that of the underlying real distribution under the frequentist, Bayesian prior, and Bayesian posterior frameworks, respectively. This expected entropy reduction is exactly the (expected) TEB, which can be expressed in closed form and acts as a consistent and unbiased correction with an appropriate convergence rate. The TEB indicates that the entropy of a specific sampling distribution should be increased accordingly. This entails a quantitative reinterpretation of the Maxent principle. By compensating for the TEB while forcing the resulting distribution to be close to the sampling distribution, our generalized quadratic Tsallis Entropy Bias Compensation (TEBC) Maxent can be expected to alleviate both overfitting and underfitting. We also present a connection between the TEB and the Lidstone estimator; as a result, a TEB–Lidstone estimator is developed by analytically identifying the rate of probability correction in the Lidstone estimator. Extensive empirical evaluation shows promising performance of both TEBC Maxent and the TEB–Lidstone estimator in comparison with various state-of-the-art density estimation methods.
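
The frequentist case of the TEB admits a quick numerical check. Under i.i.d. multinomial sampling of size n, a standard identity gives E[S_2(p_hat)] = (1 - 1/n) * S_2(p), where S_2(p) = 1 - sum_i p_i^2 is the quadratic Tsallis entropy; the expected reduction is therefore S_2(p)/n, a closed-form bias that vanishes at rate O(1/n), consistent with the abstract's claim. The Python sketch below illustrates only this frequentist identity, not the full TEBC Maxent model; the distribution p_true, the sample size n, and the trial count are illustrative choices, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def quadratic_tsallis_entropy(p):
    """Quadratic (q = 2) Tsallis entropy: S_2(p) = 1 - sum_i p_i^2."""
    return 1.0 - np.sum(p ** 2, axis=-1)

# Hypothetical "true" distribution and sample size (illustrative values only).
p_true = np.array([0.50, 0.25, 0.15, 0.10])
n, trials = 20, 200_000

# Draw many empirical (sampling) distributions of size n.
counts = rng.multinomial(n, p_true, size=trials)   # shape (trials, k)
s2_hat = quadratic_tsallis_entropy(counts / n)

s2_true = quadratic_tsallis_entropy(p_true)

# Standard multinomial identity: E[S_2(p_hat)] = (1 - 1/n) * S_2(p),
# so the expected bias (the frequentist TEB) is S_2(p) / n, and
# (n / (n - 1)) * S_2(p_hat) is an unbiased estimator of S_2(p).
print(f"S_2(p) of true distribution : {s2_true:.4f}")
print(f"mean S_2(p_hat), empirical  : {s2_hat.mean():.4f}")
print(f"(1 - 1/n) * S_2(p)          : {(1 - 1/n) * s2_true:.4f}")
print(f"bias-corrected estimate     : {n / (n - 1) * s2_hat.mean():.4f}")

Rescaling the empirical entropy by n/(n - 1) recovers S_2(p) on average, which is the bias-compensation idea that the abstract generalizes to the Maxent and Lidstone settings.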
