Open Access
To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods
Author(s)
Dawen Zhang,
Shidong Pan,
Thong Hoang,
Zhenchang Xing,
Mark Staples,
Xiwei Xu,
Lina Yao,
Qinghua Lu,
Liming Zhu
Publication year
2024
Abstract
The right to be forgotten (RTBF) is motivated by the desire of people not to be perpetually disadvantaged by their past deeds. For this, data deletion needs to be deep and permanent: deleted data should also be removed from machine learning models. Researchers have proposed machine unlearning algorithms which aim to erase specific data from trained models more efficiently. However, these methods modify how data is fed into the model and how training is done, which may subsequently compromise AI ethics from the fairness perspective. To help software engineers make responsible decisions when adopting these unlearning methods, we present the first study on machine unlearning methods to reveal their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) as a baseline, using three fairness datasets under three different deletion strategies. Experimental results show that under non-uniform data deletion, SISA leads to better fairness compared with ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods. These findings expose an important research problem in software engineering and can help practitioners better understand the potential fairness trade-offs when considering solutions for RTBF.
Language(s)
English
DOI
10.1007/s43681-023-00398-y
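The SISA method named in the abstract (Sharded, Isolated, Sliced, Aggregated training) makes unlearning cheap by splitting the training data into shards, training an isolated model per shard, and aggregating their predictions at inference; deleting a record then only requires retraining the shard that contained it, rather than the whole model. The sketch below illustrates that idea together with the kind of non-uniform deletion the study examines, using a demographic parity gap as a stand-in fairness measure. It is a minimal sketch under stated assumptions: the SisaEnsemble class, shard count, synthetic data, and dp_gap metric are hypothetical choices for illustration, not the paper's implementations of SISA, AmnesiacML, or ORTR, and the slicing/checkpointing part of full SISA is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


class SisaEnsemble:
    """Sharded training with per-shard retraining on deletion (SISA-style sketch).

    Illustrative only; the slicing/checkpointing of full SISA is omitted.
    """

    def __init__(self, n_shards=5, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.X, self.y = X, y
        idx = self.rng.permutation(len(X))
        self.shards = np.array_split(idx, self.n_shards)
        self.models = [LogisticRegression(max_iter=1000).fit(X[s], y[s])
                       for s in self.shards]

    def unlearn(self, remove_idx):
        # Only shards containing deleted points are retrained; this is what
        # makes unlearning cheaper than retraining from scratch (cf. ORTR).
        remove = set(np.atleast_1d(remove_idx).tolist())
        for k, s in enumerate(self.shards):
            if remove & set(s.tolist()):
                kept = np.array([i for i in s if i not in remove])
                self.shards[k] = kept
                self.models[k] = LogisticRegression(max_iter=1000).fit(
                    self.X[kept], self.y[kept])

    def predict(self, X):
        # Aggregate shard predictions by majority vote.
        votes = np.stack([m.predict(X) for m in self.models])
        return (votes.mean(axis=0) >= 0.5).astype(int)


def dp_gap(model, X, group):
    """Demographic parity difference: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    p = model.predict(X)
    return abs(p[group == 0].mean() - p[group == 1].mean())


if __name__ == "__main__":
    # Synthetic data with a binary sensitive attribute correlated with the label.
    rng = np.random.default_rng(1)
    n = 2000
    group = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 4)) + 0.5 * group[:, None]
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

    ens = SisaEnsemble(n_shards=5)
    ens.fit(X, y)
    print("DP gap before deletion:", dp_gap(ens, X, group))

    # Non-uniform deletion: erase only positive-label records of group 0,
    # the kind of skewed deletion under which the study reports fairness shifts.
    remove = np.where((group == 0) & (y == 1))[0][:100]
    ens.unlearn(remove)
    print("DP gap after non-uniform deletion:", dp_gap(ens, X, group))
```

Under a uniform deletion strategy, the removed indices would instead be sampled at random across groups; per the abstract, it is the non-uniform case where the fairness of the three methods diverges.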
