The risk of algorithm transparency: How algorithm complexity drives the effects on the use of advice
Author(s) - Lehmann Cedric A., Haubitz Christiane B., Fügener Andreas, Thonemann Ulrich W.
Publication year - 2022
Publication title - Production and Operations Management
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.279
H-Index - 110
eISSN - 1937-5956
pISSN - 1059-1478
DOI - 10.1111/poms.13770
Subject(s) - transparency (behavior) , advice (programming) , computer science , perception , algorithm , set (abstract data type) , machine learning , psychology , computer security , neuroscience , programming language
Although algorithmic decision support is omnipresent in many managerial tasks, a lack of algorithm transparency is often cited as a barrier to successful human–machine collaboration. In this paper, we analyze the effects of algorithm transparency on the use of advice from algorithms with different degrees of complexity. We conduct a set of laboratory experiments in which participants receive identical advice from algorithms with different levels of transparency and complexity. Our results indicate that it is not the algorithm itself but the individually perceived appropriateness of its complexity that moderates the effects of transparency on the use of advice. We summarize this effect as a plateau curve: while perceiving an algorithm as too simple severely harms the use of its advice, perceiving an algorithm as too complex has no significant effect. Our insights suggest that managers do not have to be concerned about revealing algorithms that are perceived to be appropriately complex or too complex to decision-makers, even if the decision-makers do not fully comprehend them. However, providing transparency on algorithms that are perceived to be simpler than appropriate could disappoint people's expectations and thereby reduce the use of their advice.
