Open Access
Elimination and Backward Selection of Features (P-Value Technique) In Prediction of Heart Disease by Using Machine Learning Algorithms
Author(s) - Saurabh Pal, Ritu Aggrawal
Publication year - 2021
Publication title - Türk Bilgisayar ve Matematik Eğitimi Dergisi (Turkish Journal of Computer and Mathematics Education)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.218
H-Index - 3
ISSN - 1309-4653
DOI - 10.17762/turcomat.v12i6.5765
Subject(s) - machine learning , random forest , artificial intelligence , naive bayes classifier , decision tree , computer science , feature selection , confusion matrix , logistic regression , classifier (uml) , framingham risk score , gradient boosting , boosting (machine learning) , cross validation , support vector machine , algorithm , disease , medicine
Background: Early prediction of cardiovascular disease can help determine lifestyle-change options for high-risk patients and thereby reduce complications. We propose a coronary heart disease data set analysis technique that predicts a person's risk based on their clinically recorded history. The method can be integrated into multiple applications, such as developing a decision support system, building a risk management network, and assisting experts and clinical staff.

Methods: We employed the Framingham Heart Study dataset, publicly available on Kaggle, to train several machine learning classifiers for disease prediction: logistic regression (LR), K-nearest neighbor (KNN), Naïve Bayes (NB), decision tree (DT), random forest (RF), and gradient boosting classifier (GBC). The p-value method was used for feature elimination, and the selected features were incorporated for further prediction. Various thresholds were used with the different classifiers to make predictions. To estimate the precision of the classifiers, the ROC curve, confusion matrix, and AUC value were considered for model verification. The performance of the six classifiers was compared in predicting chronic heart disease (CHD).

Results: After applying the p-value backward elimination statistical method to the 10-year CHD data set, 6 significant features were selected from 14 features with p < 0.5. Among the machine learning classifiers, GBC achieved the highest accuracy score, 87.61%.

Conclusions: Combining statistical methods, such as p-value backward elimination, with machine learning classifiers improves the accuracy of the classifier and shortens the running time of the machine.
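For illustration, the workflow described in the abstract can be sketched in Python with pandas, statsmodels, and scikit-learn. This is a minimal sketch under stated assumptions, not the authors' exact pipeline: the file name framingham.csv, the target column TenYearCHD, the 0.05 significance cut-off, the 80/20 split, and the default 0.5 decision threshold are all assumptions made for the example.

# Sketch: p-value backward elimination followed by comparison of the six classifiers.
# Assumes the Kaggle Framingham CSV ("framingham.csv") with the 10-year CHD label
# in a column named "TenYearCHD"; these names are assumptions for illustration.
import pandas as pd
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix

df = pd.read_csv("framingham.csv").dropna()
X, y = df.drop(columns=["TenYearCHD"]), df["TenYearCHD"]

def backward_eliminate(X, y, threshold=0.05):
    # Repeatedly drop the predictor with the largest logistic-regression p-value
    # until every remaining predictor's p-value falls below the threshold.
    # (0.05 is a conventional cut-off assumed here for the sketch.)
    cols = list(X.columns)
    while cols:
        fit = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < threshold:
            break
        cols.remove(worst)
    return cols

selected = backward_eliminate(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(
    X[selected], y, test_size=0.2, random_state=42, stratify=y)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(random_state=42),
    "GBC": GradientBoostingClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)  # default threshold; the study varies this per classifier
    print(name,
          f"accuracy={accuracy_score(y_te, pred):.4f}",
          f"AUC={roc_auc_score(y_te, proba):.4f}")
    print(confusion_matrix(y_te, pred))

Reported accuracies and AUC values from a sketch like this will depend on the preprocessing, split, and thresholds chosen, so they need not match the 87.61% GBC accuracy reported in the paper.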
