NON-HUMAN HUMANITARIANISM: WHEN AI FOR GOOD TURNS OUT TO BE BAD
Author(s) -
Mirca Madianou
Publication year - 2020
Publication title -
aoir selected papers of internet research
Language(s) - English
Resource type - Journals
ISSN - 2162-3317
DOI - 10.5210/spir.v2020i0.11267
Subject(s) - sociotechnical system, politics, sociology, judgement, human enhancement, digital revolution, humanitarian aid, political science, environmental ethics, epistemology, computer science, artificial intelligence, law, philosophy
With over 168 million people needing humanitarian assistance in 2018 and over 69 million refugees, the humanitarian sector is facing significant challenges. Proposals that artificial intelligence (AI) applications can be a potential solution to the crises of humanitarianism have been met with much enthusiasm. This is part of the broad trend of ‘AI for social good’ as well as of wider developments in ‘digital humanitarianism’, which refers here to the uses of digital innovation and data by public and private sectors in response to humanitarian emergencies. Chatbots; predictive analytics and modelling that claim to forecast future epidemics or population flows; and biometric technologies that rely on neural networks and machine-learning algorithms are among the applications becoming increasingly popular in aid operations.