Open Access
Mitigation Techniques to Overcome Data Harm in Model Building for ML
Author(s) - Ayşe Arslan
Publication year - 2022
Publication title - International Journal of Artificial Intelligence and Applications
Language(s) - English
Resource type - Journals
eISSN - 0976-2191
pISSN - 0975-900X
DOI - 10.5121/ijaia.2022.13105
Subject(s) - harm , computer science , software deployment , pipeline (software) , data collection , data science , downstream (manufacturing) , risk analysis (engineering) , computer security , artificial intelligence , sociology , operations management , business , software engineering , economics , psychology , social science , social psychology , programming language
Given the impact of Machine Learning (ML) on individuals and society, understanding how harm might occur throughout the ML life cycle has become more critical than ever. By offering a framework for identifying distinct potential sources of downstream harm in the ML pipeline, the paper demonstrates that choices made throughout the phases of data collection, development, and deployment matter far beyond model training alone. Relevant mitigation techniques are also suggested for each phase, rather than merely relying on generic notions of what counts as fairness.
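One concrete form such a pipeline-stage check might take is an automated fairness audit run on predictions before deployment. The sketch below is purely illustrative and is not taken from the paper: the metric (demographic parity difference), function names, and toy data are all assumptions chosen to show the idea of measuring group disparities rather than relying on a generic notion of fairness.

```python
# Hypothetical deployment-stage audit: compare positive-prediction rates
# across demographic groups. Nothing here is prescribed by the paper;
# the metric and names are illustrative assumptions.

def demographic_parity_difference(preds, groups):
    """Absolute gap between the highest and lowest positive-prediction
    rates observed across groups (0.0 means parity)."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy binary predictions for two groups, "a" and "b".
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(round(gap, 2))  # 0.5: group "a" receives positives at 75% vs 25%
```

A real audit would of course use a metric matched to the harm under study and data from the actual collection and development phases, but the structure, measure a disparity and gate deployment on it, is the same.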
