Open Access
Whom to Trust, How and Why: Untangling Artificial Intelligence Ethics Principles, Trustworthiness, and Trust
Author(s) - Andreas Duenser, David M. Douglas
Publication year - 2023
Publication title - IEEE Intelligent Systems
Language(s) - English
Resource type - Journals
eISSN - 1941-1294
pISSN - 1541-1672
DOI - 10.1109/MIS.2023.3322586
Subject(s) - computing and processing , signal processing and analysis , communication, networking and broadcast technologies , components, circuits, devices and systems
In this article, we present an overview of the literature on trust in artificial intelligence (AI) and AI trustworthiness, and argue both for distinguishing these concepts more clearly and for gathering more empirical evidence on what contributes to people’s trusting behaviors. We argue that trust in AI involves not only reliance on the system itself but also trust in the system’s developers. AI ethics principles such as explainability and transparency are often assumed to promote user trust, but empirical evidence of how such features actually affect users’ perceptions of a system’s trustworthiness remains scarce and inconclusive. AI systems should be recognized as sociotechnical systems, in which the people involved in designing, developing, deploying, and using the system are as important as the system itself in determining whether it is trustworthy. Without recognizing these nuances, “trust in AI” and “trustworthy AI” risk becoming nebulous terms for any desirable feature of AI systems.