
Governance, Risk, and Artificial Intelligence
Author(s) - Mannes, Aaron
Publication year - 2020
Publication title - AI Magazine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.597
H-Index - 79
eISSN - 2371-9621
pISSN - 0738-4602
DOI - 10.1609/aimag.v41i1.5200
Subject(s) - dignity, embodied cognition, risk analysis (engineering), corporate governance, robot, risk governance, artificial intelligence, applications of artificial intelligence, computer science, computer security, engineering, business, political science, law, finance
Artificial intelligence, whether embodied in robots and Internet of Things devices or disembodied as intelligent agents and decision-support systems, can enrich the human experience. It will also fail and cause harms, including physical injury and financial loss, as well as subtler harms such as instantiating human bias or undermining individual dignity. These failures could have a disproportionate impact because strange, new, and unpredictable dangers may lead to public discomfort and rejection of artificial intelligence. Two complementary approaches can mitigate these risks: the hard power of regulation, which ensures that artificial intelligence is safe, and the soft power of risk communication, which engages the public and builds trust. Both should be implemented as artificial intelligence becomes increasingly prevalent in daily life.