The next‐generation K‐means algorithm
Author(s) - Eugene Demidenko
Publication year - 2018
Publication title - Statistical Analysis and Data Mining: The ASA Data Science Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 33
eISSN - 1932-1872
pISSN - 1932-1864
DOI - 10.1002/sam.11379
Subject(s) - cluster analysis , computer science , algorithm , hierarchical clustering , data mining , mathematics , statistics , artificial intelligence
Typically, when referring to model‐based classification, the mixture distribution approach is understood. In contrast, we revive the hard‐classification model‐based approach developed by Banfield and Raftery (1993), for which K‐means is equivalent to maximum likelihood (ML) estimation. The next‐generation K‐means algorithm does not end once the classification is achieved, but moves forward to answer the following fundamental questions: Are there clusters, how many clusters are there, what are the statistical properties of the estimated means and index sets, what is the distribution of the coefficients in clusterwise regression, and how should multilevel data be classified? The statistical model‐based approach to the K‐means algorithm is key because it permits statistical simulation and the study of classification properties along the lines of classical statistics. This paper illustrates the application of ML classification to testing the no‐clusters hypothesis, to studying various methods for selecting the number of clusters via simulation, to robust clustering using the Laplace distribution, to studying properties of the coefficients in clusterwise regression, and finally to multilevel data by marrying the variance components model with K‐means.
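
The abstract's central point is that K‐means can be read as ML estimation in a hard‐classification model. A minimal sketch of that equivalence follows, assuming equal‐variance spherical Gaussian clusters: under that model, maximizing the classification likelihood over the cluster means and index sets is the same as minimizing the K‐means within‐cluster sum of squares, so each Lloyd iteration is an ML step. This is an illustrative reconstruction, not the paper's implementation; the function name, parameters, and the synthetic data are assumptions for the example.

import numpy as np

def kmeans_ml(X, K, n_iter=100, seed=0):
    """Lloyd's K-means. Under an equal-variance spherical Gaussian
    hard-classification model, each iteration does not decrease the
    classification log-likelihood."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(n_iter):
        # Assignment step: ML estimate of the index sets is the nearest mean
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        # Update step: given the index sets, the ML means are cluster averages
        new_centers = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(K)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Final assignment and within-cluster sum of squares (the -2*log-likelihood
    # up to constants under the assumed model)
    dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = dist.argmin(axis=1)
    wss = ((X - centers[labels]) ** 2).sum()
    return labels, centers, wss

# Toy simulation: two well-separated spherical Gaussian clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
labels, centers, wss = kmeans_ml(X, K=2)
print(centers, wss)

Because the objective is a likelihood, the same machinery supports the questions the abstract lists: simulating the null (no‐clusters) distribution of the criterion, comparing rules for choosing K, or swapping the Gaussian for a Laplace density to obtain a robust, median‐based variant.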
