
Analysis of classification and Naïve Bayes algorithm k-nearest neighbor in data mining
Author(s) -
Lotar Mateus Sinaga,
Sawaluddin,
Saib Suwilo
Publication year - 2020
Publication title -
IOP Conference Series: Materials Science and Engineering
Language(s) - English
Resource type - Journals
eISSN - 1757-899X
pISSN - 1757-8981
DOI - 10.1088/1757-899x/725/1/012106
Subject(s) - naive bayes classifier, bayes' theorem, k nearest neighbors algorithm, computer science, artificial intelligence, data mining, algorithm, probabilistic logic, data set, set (abstract data type), machine learning, bayes error rate, simple (philosophy), statistical classification, pattern recognition (psychology), bayesian probability, bayes classifier, support vector machine, philosophy, epistemology, programming language
Naïve Bayes is a simple probabilistic prediction method based on applying Bayes' theorem (Bayes' rule) under the assumption of strong (naïve) independence between features. K-Nearest Neighbor (K-NN) is an instance-based, lazy learning technique: it classifies new (testing) data by searching the training data for the group of k objects closest (most similar) to the new object. Classification is a data mining technique for building a model from a predetermined data set, and data mining techniques are a suitable choice for addressing this problem. Comparing the results of the two different classification algorithms reveals which algorithm is better and more efficient for future use, and it is recommended to repeat the comparison of the Naïve Bayes and K-NN algorithms on different datasets. To keep the research directed, the writers formulate the problem as follows: to determine the accuracy of the Naïve Bayes and K-NN algorithms in classifying data.
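The comparison the abstract describes can be sketched in a few lines of Python. This is not the authors' code; the toy two-feature dataset, the Gaussian likelihood model for Naïve Bayes, and the choice k = 3 are all illustrative assumptions. The sketch fits a Gaussian Naïve Bayes model (Bayes' rule with the independence assumption), classifies test points with a majority-vote K-NN (lazy learning, no model built ahead of time), and reports each algorithm's accuracy.

```python
import math
from collections import Counter, defaultdict

# Toy training data (assumed for illustration): label 0 clusters near (1, 1),
# label 1 clusters near (5, 5). Each item is ((feature1, feature2), label).
train = [((1.0, 1.2), 0), ((1.5, 0.8), 0), ((0.9, 1.1), 0), ((1.2, 1.4), 0),
         ((5.0, 5.2), 1), ((4.8, 5.1), 1), ((5.3, 4.9), 1), ((5.1, 5.4), 1)]
test = [((1.1, 1.0), 0), ((5.2, 5.0), 1), ((1.3, 1.1), 0), ((4.9, 5.3), 1)]

def nb_fit(data):
    """Estimate per-class priors and per-feature mean/variance.

    The naive independence assumption lets each feature be modelled
    separately with its own Gaussian."""
    by_label = defaultdict(list)
    for x, y in data:
        by_label[y].append(x)
    model, n = {}, len(data)
    for y, rows in by_label.items():
        cols = list(zip(*rows))  # one tuple per feature
        means = [sum(c) / len(rows) for c in cols]
        vars_ = [sum((v - m) ** 2 for v in c) / len(rows) + 1e-9
                 for c, m in zip(cols, means)]
        model[y] = (len(rows) / n, means, vars_)
    return model

def nb_predict(model, x):
    """Pick the class maximising log prior + sum of log Gaussian likelihoods."""
    best, best_lp = None, -math.inf
    for y, (prior, means, vars_) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

def knn_predict(data, x, k=3):
    """Majority vote among the k training points nearest to x."""
    nearest = sorted(data, key=lambda p: math.dist(p[0], x))[:k]
    return Counter(y for _, y in nearest).most_common(1)[0][0]

model = nb_fit(train)
nb_acc = sum(nb_predict(model, x) == y for x, y in test) / len(test)
knn_acc = sum(knn_predict(train, x) == y for x, y in test) / len(test)
print(f"Naive Bayes accuracy: {nb_acc:.2f}, k-NN accuracy: {knn_acc:.2f}")
```

On this cleanly separable toy data both classifiers reach full accuracy; the study's point is that on real, noisier datasets the two accuracies diverge, which is what the comparison is meant to measure.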