On Bayesian Clustering with a Structured Gaussian Mixture
Author(s) -
Keisuke Yamazaki
Publication year - 2014
Publication title -
Journal of Advanced Computational Intelligence and Intelligent Informatics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.172
H-Index - 20
eISSN - 1343-0130
pISSN - 1883-8014
DOI - 10.20965/jaciii.2014.p1007
Subject(s) - computer science , cluster analysis , mixture model , unobservable , artificial intelligence , pattern recognition (psychology) , gaussian , bayesian probability , constraint (computer aided design) , covariance matrix , hierarchical clustering , bayes' theorem , probabilistic logic , data mining , machine learning , algorithm , mathematics , econometrics , physics , geometry , quantum mechanics
Cluster analysis is commonly used in the fields of computational intelligence and pattern recognition. The task is to detect the unobservable labels that indicate to which clusters the observable data belong. A Gaussian mixture is a representative hierarchical model that is often used when taking a probabilistic approach to this task. Although it is widely used, the statistical properties of cluster analysis have not yet been clarified. The present paper analyzes the theory of Bayesian clustering for the case in which the number of clusters is unknown and the variance-covariance matrix of the Gaussian distribution has a constraint. We refer to this constraint as the structure of the components. The result of this analysis shows that, even if the estimation method does not take account of the structure, the Bayes method provides an effective, tractable, and efficient algorithm. Based on an experiment with simulated data, we confirmed the advantages of the Bayes method over the expectation-maximization (EM) method.
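The paper's own derivation is not reproduced here, but the qualitative contrast the abstract draws, EM-based maximum likelihood versus Bayesian estimation when the number of clusters is unknown, can be sketched with off-the-shelf tools. The following is a minimal illustration (not the author's method), assuming scikit-learn's `GaussianMixture` (EM) and `BayesianGaussianMixture` (variational Bayes with a Dirichlet prior on the mixing weights) as stand-ins; data are simulated from two well-separated Gaussian clusters while both models are deliberately given five components.

```python
# Minimal sketch, NOT the paper's algorithm: contrast EM fitting with a
# variational-Bayes fit when the number of components is overspecified.
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
# Simulated data: two well-separated 2-D Gaussian clusters.
X = np.vstack([
    rng.normal(loc=-2.0, scale=0.5, size=(150, 2)),
    rng.normal(loc=+2.0, scale=0.5, size=(150, 2)),
])

# EM (maximum likelihood) with five components: each fitted component
# typically retains a nonzero mixing weight.
em = GaussianMixture(n_components=5, random_state=0).fit(X)

# Variational Bayes with a sparsity-inducing concentration prior:
# redundant components are driven toward zero weight, so the model
# effectively selects the number of clusters.
vb = BayesianGaussianMixture(
    n_components=5,
    weight_concentration_prior=1e-3,
    random_state=0,
).fit(X)

print("EM weights:", np.round(em.weights_, 3))
print("VB weights:", np.round(vb.weights_, 3))
active = int(np.sum(vb.weights_ > 0.05))
print("effectively active VB components:", active)
```

Comparing the two weight vectors shows the behavior the abstract alludes to: the Bayesian fit concentrates mass on roughly as many components as the data support, whereas plain EM spreads weight over all five.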