Open Access
Tversky Similarity based UnderSampling with Gaussian Kernelized Decision Stump Adaboost Algorithm for Imbalanced Medical Data Classification
Author(s) -
M. Kamaladevi,
Vishwesh Venkatraman
Publication year - 2021
Publication title -
international journal of computers, communications and control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.422
H-Index - 33
eISSN - 1841-9844
pISSN - 1841-9836
DOI - 10.15837/ijccc.2021.6.4291
Subject(s) - undersampling , pattern recognition (psychology) , artificial intelligence , oversampling , computer science , similarity (geometry) , adaboost , mathematics , decision tree , data mining , machine learning , classifier (uml) , computer network , bandwidth (computing) , image (mathematics)
In recent years, imbalanced data classification has been applied in several domains, including fraud detection in the banking sector and disease prediction in healthcare. To address the imbalanced classification problem at the data level, strategies such as undersampling or oversampling are widely used; however, sampling techniques pose the challenge of significant information loss. The proposed method involves two processes: undersampling and classification. First, undersampling is performed by means of a Tversky Similarity Indexive Regression model, in which regression combined with the Tversky similarity index is used to analyze the relationship between two instances of the dataset. Next, Gaussian Kernelized Decision Stump AdaBoosting is used to classify the instances into two classes. Here, the root node of the decision stump makes its decision on the basis of a Gaussian kernel function, considering the average of neighboring points, and the result is obtained at the leaf node. Weights are also adjusted to minimize the training error occurring during classification and thereby find the best classifier. Experimental assessment is performed on two imbalanced datasets (the Pima Indian Diabetes and Hepatitis datasets). Performance metrics such as precision, recall, area under the ROC curve (AUC), and F1-score are compared with existing undersampling methods. Experimental results show that the prediction accuracy of the minority class improves, thereby minimizing false positives and false negatives.
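The two-stage pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the Tversky parameters `alpha` and `beta`, the kernel width `gamma`, and the stump-fitting details are all assumptions made for the example.

```python
import numpy as np

def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    """Tversky index for binary feature vectors:
    |A & B| / (|A & B| + alpha*|A - B| + beta*|B - A|).
    alpha, beta are illustrative defaults, not values from the paper."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    common = np.sum(a & b)
    denom = common + alpha * np.sum(a & ~b) + beta * np.sum(~a & b)
    return common / denom if denom else 0.0

def gaussian_kernel_features(X, centers, gamma=0.5):
    """Map each sample to Gaussian-kernel similarities against a set of
    reference points (one reading of 'average of neighboring points')."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_stump(X, y, w):
    """Weighted decision stump: exhaustive search for the best
    (feature, threshold, polarity) under sample weights w."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost_stumps(X, y, rounds=15):
    """AdaBoost with decision stumps; labels y in {-1, +1}.
    Misclassified samples are upweighted each round."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        j, thr, pol, err = fit_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        a = 0.5 * np.log((1 - err) / err)       # stump weight
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-a * y * pred)              # shrink correct, grow wrong
        w /= w.sum()
        ensemble.append((a, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the boosted stumps."""
    agg = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
              for a, j, t, p in ensemble)
    return np.sign(agg)
```

In the paper's pipeline, the Tversky score would guide which majority-class instances to retain or drop before boosting, and the stumps would operate on the Gaussian-kernel-transformed features produced by `gaussian_kernel_features`.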
