Open Access
Strategic Information Disclosure to People with Multiple Alternatives
Author(s) - Amos Azaria, Zinovi Rabinovich, Claudia V. Goldman, Sarit Kraus
Publication year - 2014
Publication title - ACM Transactions on Intelligent Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.914
H-Index - 63
eISSN - 2157-6912
pISSN - 2157-6904
DOI - 10.1145/2558397
Subject(s) - communication source , computer science , persuasion , parameterized complexity , signaling game , perfect information , routing (electronic design automation) , key (lock) , action (physics) , artificial intelligence , machine learning , human–computer interaction , theoretical computer science , computer security , algorithm , computer network , mathematical economics , philosophy , linguistics , physics , mathematics , quantum mechanics
In this article, we study automated agents that are designed to encourage humans to take some actions over others by strategically disclosing key pieces of information. To this end, we utilize the framework of persuasion games, a branch of game theory that deals with asymmetric interactions where one player (Sender) possesses more information about the world, but it is only the other player (Receiver) who can take an action. In particular, we use an extended persuasion model, where the Sender's information is imperfect and the Receiver has more than two alternative actions available. We design a computational algorithm that, from the Sender's standpoint, calculates the optimal information disclosure rule. The algorithm is parameterized by the Receiver's decision model (i.e., what choice he will make based on the information disclosed by the Sender) and can be retuned accordingly. We then provide an extensive experimental study of the algorithm's performance in interactions with human Receivers. First, we consider a fully rational (in the Bayesian sense) Receiver decision model and experimentally show the efficacy of the resulting Sender's solution in a routing domain. Despite the discrepancy in the Sender's and the Receiver's utilities from each of the Receiver's choices, our Sender agent successfully persuaded human Receivers to select an option more beneficial for the agent. Dropping the Receiver's rationality assumption, we introduce a machine learning procedure that generates a more realistic human Receiver model. We then show its significant benefit to the Sender solution by repeating our routing experiment. To complete our study, we introduce a second (supply-demand) experimental domain and, by contrasting it with the routing domain, obtain general guidelines for a Sender on how to construct a Receiver model.
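
The abstract describes computing a Sender-optimal disclosure rule against a given Receiver decision model. The sketch below is only a rough illustration of that setting, not the authors' algorithm: it brute-forces the best deterministic disclosure rule for a toy routing instance in which a Bayesian-rational Receiver chooses among three route alternatives. All state names, priors, and utilities are invented for illustration, and the Sender is assumed to observe the true state perfectly, whereas the paper's Sender has imperfect information.

```python
from itertools import product

# Toy routing instance (all numbers are illustrative assumptions, not from the paper).
STATES  = ["clear", "moderate", "jam"]           # condition of the main highway
PRIOR   = {"clear": 0.5, "moderate": 0.3, "jam": 0.2}
SIGNALS = ["clear", "moderate", "jam"]           # messages the Sender may send
ACTIONS = ["highway", "side_road", "dirt_road"]  # Receiver has more than two alternatives

# Receiver's utility for each (state, action), e.g., negative travel time.
U_RECEIVER = {
    ("clear",    "highway"): 10, ("clear",    "side_road"): 6, ("clear",    "dirt_road"): 3,
    ("moderate", "highway"):  5, ("moderate", "side_road"): 6, ("moderate", "dirt_road"): 3,
    ("jam",      "highway"):  1, ("jam",      "side_road"): 6, ("jam",      "dirt_road"): 3,
}
# Sender's utility depends only on the chosen route here (it prefers traffic on the highway).
U_SENDER = {"highway": 8, "side_road": 2, "dirt_road": 5}

def receiver_best_response(policy, signal):
    """Bayesian-rational Receiver: posterior over states given the signal, then argmax action."""
    posterior = {s: PRIOR[s] for s in STATES if policy[s] == signal}
    total = sum(posterior.values())
    if total == 0:                       # signal is never sent under this disclosure rule
        return None
    posterior = {s: p / total for s, p in posterior.items()}
    return max(ACTIONS,
               key=lambda a: sum(p * U_RECEIVER[(s, a)] for s, p in posterior.items()))

def sender_value(policy):
    """Sender's expected utility when the Receiver best-responds to this disclosure rule."""
    return sum(PRIOR[s] * U_SENDER[receiver_best_response(policy, policy[s])] for s in STATES)

# Brute-force search over deterministic disclosure rules (state -> signal).
best_policy, best_value = None, float("-inf")
for assignment in product(SIGNALS, repeat=len(STATES)):
    policy = dict(zip(STATES, assignment))
    value = sender_value(policy)
    if value > best_value:
        best_policy, best_value = policy, value

print("Best deterministic disclosure rule:", best_policy)
print("Sender's expected utility:", round(best_value, 3))
```

In this toy instance, fully truthful disclosure earns the Sender less than pooling all states into a single message, which illustrates (in miniature) the abstract's point that strategic rather than complete disclosure can steer the Receiver toward options the Sender prefers; the paper's contribution additionally handles imperfect Sender information, randomized rules, and learned (non-Bayesian) human Receiver models.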
