Perceived trust in artificial intelligence technologies: A preliminary study
Author(s) - Bitkina Olga Vl., Jeong Heejin, Lee Byung Cheol, Park Jangwoon, Park Jaehyun, Kim Hyun K.
Publication year - 2020
Publication title - Human Factors and Ergonomics in Manufacturing & Service Industries
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.408
H-Index - 39
eISSN - 1520-6564
pISSN - 1090-8471
DOI - 10.1002/hfm.20839
Subject(s) - perception , task (project management) , product (mathematics) , perspective (graphical) , computer science , knowledge management , emerging technologies , artificial intelligence , psychology , engineering , geometry , mathematics , systems engineering , neuroscience
Artificial intelligence (AI) is becoming increasingly prevalent in all spheres of society. However, users' and customers' perception of AI remains a main barrier to its widespread adoption. Previous studies have shown that society's acceptance of new technologies depends on their perceived characteristics. This study examined users' perceived trust, task difficulty, and application performance when using an AI‐based technology. These factors help to elucidate the mechanisms for building trust in AI technology from the users' perspective. A total of 18 participants took part in an experiment using the Google AutoDraw software as an AI tool. The results indicate that task difficulty, perceived performance, and task success or failure can be regarded as influential factors in the evaluation of perceived trust. Users' perceived trust in new AI products can be increased by improving product performance and supporting successful task completion. These results and insights can help AI product developers increase users' trust in, and attraction toward, their technologies and applications.