Open Access
DEFINING AND DETECTING FAIRNESS BIAS FOR BINARY CLASSIFICATION PROBLEM IN FINANCIAL ANALYSIS
Author(s) -
GEVORG GHALACHYAN
Publication year - 2021
Publication title -
Gitakan Artsakh (Scientific Artsakh)
Language(s) - English
Resource type - Journals
ISSN - 2738-2672
DOI - 10.52063/25792652-2021.2-183
Subject(s) - metric (unit) , computer science , observational study , binary number , binary classification , legislature , replication (statistics) , econometrics , similarity (geometry) , statistics , artificial intelligence , machine learning , mathematics , economics , political science , operations management , arithmetic , support vector machine , law , image (mathematics)
This article aims to present fairness bias in artificial intelligence models. First, it introduces use cases and legislative constraints on automated decision making with respect to sensitive features. Then, using academic datasets, it presents historical human bias, measures of dataset fairness, and an effective way of choosing the appropriate metric. Last, different AI models are estimated to show how decision bias is replicated from the data into the models. The design of the research is observational; academic datasets have been used. For the quantitative analysis, both descriptive and inferential statistics are applied. The analysis was done for the binary classification problem, focusing mainly on decision making in finance. The phenomenon of unequal decisions toward unprivileged demographic groups was shown and quantified: in the example given, the bias between groups averaged 8-20%, and it persisted even in the most accurate models, with AUC scores of 85% and 90%.
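The abstract quantifies an 8-20% gap in outcomes between demographic groups without naming a specific metric. One common dataset-level measure of such a gap is the demographic-parity (statistical-parity) difference: the gap in positive-prediction rates between the privileged and unprivileged group. A minimal pure-Python sketch (the toy data, group coding, and function names here are illustrative, not taken from the study):

```python
def positive_rate(y_pred, group, g):
    """Share of positive predictions (e.g. approved loans) within group g."""
    preds = [p for p, gr in zip(y_pred, group) if gr == g]
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, group):
    """Positive-rate gap between the privileged (1) and unprivileged (0)
    group; 0 means statistical parity, larger values mean more bias."""
    return positive_rate(y_pred, group, 1) - positive_rate(y_pred, group, 0)

# Illustrative toy data: 1 = credit approved; group 1 = privileged.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
group  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Privileged approval rate is 4/5, unprivileged is 1/5.
print(round(demographic_parity_difference(y_pred, group), 3))  # → 0.6
```

The same per-group rate comparison extends to other fairness metrics (equal opportunity, disparate impact) by swapping the quantity being compared between groups.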
