Open Access
Large-scale Agent Data Partitioning Based on DensityRepel-K_medoids
Author(s) -
Lingjuan Wu,
Jie Liang,
Bo Li
Publication year - 2019
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1284/1/012046
Subject(s) - medoid, cluster analysis, computer science, algorithm, k medoids, data mining, cluster (spacecraft), scale (ratio), data set, cure data clustering algorithm, correlation clustering, artificial intelligence, quantum mechanics, programming language, physics
Large-scale Agent data partitioning is a prerequisite for parallel distributed computing in ABMS (Agent-based Modeling and Simulation). Building on the distance-based K_medoids clustering algorithm, this paper proposes an improved algorithm, DensityRepel-K_medoids, implements it in the high-performance programming language X10, and applies it to large-scale Agent data partitioning simulations based on distance interaction. The DensityRepel-K_medoids algorithm first determines the density value and the repulsion value of each Agent in the data set, then pre-selects cluster centers according to these density and repulsion values, and finally uses the pre-selected centers as the initial cluster centers for iterative clustering until convergence. The algorithm avoids the sensitivity of the K_means clustering algorithm to outliers as well as the poor scalability of the standard K_medoids algorithm on large-scale data. Comparative simulation experiments on Agent data sets of different scales show that the algorithm delivers better performance.
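The abstract only outlines the pre-selection step, so the following is a minimal Python sketch of how a density/repulsion pre-selection combined with a k-medoids iteration could look. The concrete definitions used here (density as the neighbour count within a cutoff radius, repulsion as the distance to the nearest point of higher density, initial medoids chosen by the largest density-times-repulsion products), as well as the function name density_repel_kmedoids and the cutoff parameter, are illustrative assumptions and are not taken from the paper, which implements its version in X10.

```python
# Hypothetical sketch of the DensityRepel-K_medoids idea; the exact
# density/repulsion definitions below are assumptions, not the paper's.
import numpy as np

def density_repel_kmedoids(points, k, cutoff, max_iter=100):
    """Cluster `points` (n x d array) into k groups."""
    n = len(points)
    # Pairwise Euclidean distances between all Agent data points.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)

    # Density: how many other points fall within the cutoff radius.
    density = (dist < cutoff).sum(axis=1) - 1

    # Repulsion: distance to the nearest point of strictly higher density
    # (points of maximal density use their largest distance instead).
    repulsion = np.empty(n)
    for i in range(n):
        higher = np.where(density > density[i])[0]
        repulsion[i] = dist[i, higher].min() if len(higher) else dist[i].max()

    # Pre-select initial medoids: largest density * repulsion products.
    medoids = np.argsort(density * repulsion)[-k:]

    for _ in range(max_iter):
        # Assign every point to its nearest medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # New medoid: the member minimising total distance to its cluster.
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break  # converged: the medoid set no longer changes
        medoids = new_medoids

    return medoids, np.argmin(dist[:, medoids], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three synthetic 2-D Agent "position" clusters.
    data = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in (0, 5, 10)])
    medoids, labels = density_repel_kmedoids(data, k=3, cutoff=1.0)
    print("medoid indices:", medoids)
```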
