
Conservatism predicts aversion to consequential Artificial Intelligence
Author(s) - Noah Castelo, Adrian F. Ward
Publication year - 2021
Publication title - PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0261467
Subject(s) - conservatism , risk aversion (psychology) , cognitive reframing , perception , intervention (counseling) , psychology , social psychology , politics , political science
Artificial intelligence (AI) has the potential to transform society by automating tasks as diverse as driving cars, diagnosing diseases, and providing legal advice. The degree to which AI can improve outcomes in these and other domains depends on how comfortable people are trusting AI with these tasks, which in turn depends on lay perceptions of AI. The present research examines how these critical lay perceptions vary as a function of political conservatism. Across five survey experiments, we find that political conservatism is associated with lower comfort with and trust in AI, i.e., with AI aversion. This relationship between conservatism and AI aversion is explained by the link between conservatism and risk perception: more conservative individuals perceive AI as riskier and are therefore more averse to its adoption. Finally, we test whether a moral reframing intervention can reduce AI aversion among conservatives.