A Leap of Faith: Is There a Formula for “Trustworthy” AI?
Author(s) - Matthias Braun, Hannah Bleher, Patrik Hummel
Publication year - 2021
Publication title -
Hastings Center Report
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.515
H-Index - 63
eISSN - 1552-146X
pISSN - 0093-0334
DOI - 10.1002/hast.1207
Subject(s) - trust , distrust , deliberation , faith , leaps , trustworthiness , credibility , epistemology , sociology , public relations , engineering ethics , political science , law , psychology , social psychology , philosophy , politics , engineering
Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High‐Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust in AI that does not conceive of trust merely as an accelerator for societal acceptance of AI technologies. Instead, we argue, trust is granted through leaps of faith. For this reason, trust remains precarious, fragile, and resistant to promotion through formulaic approaches. We also highlight the significance of distrust in societal deliberation, as it relates to trust in various and intricate ways. Among the fruitful aspects of distrust is that it enables individuals to forgo technology if desired, to constrain its power, and to exercise meaningful human control.