INCREASING ACCURACY OF K-NEAREST NEIGHBOR CLASSIFIER FOR TEXT CLASSIFICATION
Author(s) -
Falguni N. Patel,
Neha Soni
Publication year - 2014
Publication title -
International Journal of Computer Science and Informatics
Language(s) - English
Resource type - Journals
ISSN - 2231-5292
DOI - 10.47893/ijcsi.2014.1183
Subject(s) - k nearest neighbors algorithm , classifier (uml) , pattern recognition (psychology) , computer science , artificial intelligence , nearest neighbor chain algorithm , trigonometric functions , inverse , voting , majority rule , best bin first , large margin nearest neighbor , weighted voting , data mining , mathematics , geometry , canopy clustering algorithm , correlation clustering , cluster analysis , politics , political science , law
The k Nearest Neighbor rule is a well-known technique for text classification, owing to its simplicity, effectiveness, and easy modifiability. In this paper, we briefly discuss text classification and the k-NN algorithm, and analyse the classifier's sensitivity to the choice of k. To overcome this problem, we introduce an inverse cosine distance weighted voting function for text classification. As a result, classification accuracy remains high even when a large value of k is chosen, compared to the simple k Nearest Neighbor classifier. Experimental results show that the proposed weighting function is most effective when an application has a large text dataset with some dominating categories. Keywords: Text Classification, k-Nearest Neighbor, Weighted Voting, Dominating Class.
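The abstract does not give the exact form of the proposed weighting function, but the idea of inverse cosine distance weighted voting can be sketched as follows: each of the k nearest neighbors casts a vote for its class weighted by the inverse of its cosine distance to the query document, so that nearer neighbors dominate and large k no longer dilutes the decision. The specific weight `1 / (distance + eps)` below is an assumed, illustrative choice, not necessarily the authors' exact function.

```python
import numpy as np

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity between two vectors.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def knn_inverse_cosine_vote(X_train, y_train, x, k, eps=1e-9):
    """Classify query x with k-NN, weighting each neighbor's vote by
    the inverse of its cosine distance to x (assumed weighting form)."""
    dists = np.array([cosine_distance(x, xi) for xi in X_train])
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        # eps guards against division by zero for identical vectors.
        w = 1.0 / (dists[i] + eps)
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    # Return the class with the largest total weighted vote.
    return max(votes, key=votes.get)
```

With weighted votes, even neighbors from a dominating (majority) category contribute little if they lie far from the query in direction, which is the behaviour the paper's accuracy improvement relies on.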
