Open Access
Assessing Fair Machine Learning Strategies Through a Fairness-Utility Trade-off Metric
Author(s) - Luiz Fernando F. P. de Lima, Danielle Rousy Dias Ricarte, Clauirton Siebra
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5753/eniac.2021.18288
Subject(s) - machine learning , artificial intelligence , computer science , adversarial learning , fairness metric , performance metric , baseline model
Due to the increasing use of artificial intelligence for decision making and the observation of biased decisions in many applications, researchers are investigating solutions that attempt to build fairer models that do not reproduce discrimination. Some of the explored strategies rely on adversarial learning, encoding fairness constraints into the training process through an adversarial model. However, each proposal typically assesses its model with its own metric, which makes comparing current approaches difficult. To address this, we defined a metric that captures the trade-off between utility and fairness. Using this metric, we assessed 15 fair model implementations and a baseline model, providing a systematic comparative ruler for other approaches.
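The record does not give the metric's actual formulation. As a minimal sketch of a fairness-utility trade-off metric of this general kind, the snippet below assumes accuracy as the utility term, one minus the demographic parity difference as the fairness term, and a harmonic-mean combination; all function names and the combination rule are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    # Largest gap in positive-prediction rate across protected groups.
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def fairness_utility_tradeoff(y_true, y_pred, groups):
    # Utility: plain accuracy. Fairness: 1 - demographic parity difference.
    # The harmonic mean rewards models that score well on both axes at once
    # (a hypothetical combination; the paper's actual formula is not given
    # in this record).
    utility = float((y_true == y_pred).mean())
    fairness = 1.0 - demographic_parity_difference(y_pred, groups)
    return 2 * utility * fairness / (utility + fairness + 1e-12)

# Toy usage: binary predictions over two protected groups, A and B.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(fairness_utility_tradeoff(y_true, y_pred, groups))
```

A single scalar of this shape is what makes it possible to rank heterogeneous fair-model implementations, such as the 15 assessed in the paper, on one common scale.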