Open Access
Avoiding Negative Side Effects Due to Incomplete Knowledge of AI Systems
Author(s) - Sandhya Saisubramanian, Shlomo Zilberstein, Ece Kamar
Publication year - 2022
Publication title - AI Magazine
Language(s) - English
Resource type - Journals
eISSN - 2371-9621
pISSN - 0738-4602
DOI - 10.1609/aimag.v42i4.7390
Subject(s) - software deployment, computer science, key (lock), fidelity, risk analysis (engineering), reliability (semiconductor), management science, artificial intelligence, data science, engineering, computer security, software engineering, business, telecommunications, power (physics), physics, quantum mechanics
Autonomous agents acting in the real world often operate based on models that ignore certain aspects of the environment. The incompleteness of any given model, whether handcrafted or machine-acquired, is inevitable due to the practical limitations of any modeling technique for complex real-world settings. Because of the limited fidelity of its model, an agent's actions may have unexpected, undesirable consequences during execution. Learning to recognize and avoid such negative side effects (NSEs) of an agent's actions is critical to improving the safety and reliability of autonomous systems. Mitigating NSEs is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems and their broad societal impacts. This article provides a comprehensive overview of different forms of NSEs and the recent research efforts to address them. We identify key characteristics of NSEs, highlight the challenges in avoiding them, and discuss recently developed approaches, contrasting their benefits and limitations. The article concludes with a discussion of open questions and suggestions for future research directions.
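The core phenomenon the abstract describes, an agent whose plan is optimal under its incomplete model but harmful in the true environment, can be illustrated with a small hypothetical sketch (not from the article; the corridor, vase, and all names below are invented for illustration):

```python
# Hypothetical NSE example: an agent plans in a 1-D corridor using a model
# that omits one feature of the true environment (a fragile vase in cell 1).
# Its shortest-path plan is optimal under the incomplete model, yet causes
# a negative side effect (NSE) when executed in the real environment.

GOAL = 3
VASE_CELL = 1  # feature present in the true environment, absent from the model

def plan_shortest_path(start: int, goal: int) -> list[int]:
    """Plan with the agent's incomplete model: simply minimize step count."""
    step = 1 if goal > start else -1
    path, pos = [], start
    while pos != goal:
        pos += step
        path.append(pos)
    return path

def execute(path: list[int]) -> dict:
    """Execute in the true environment, recording any side effect."""
    return {
        "reached_goal": path[-1] == GOAL,
        "nse": VASE_CELL in path,  # the agent unknowingly breaks the vase
    }

plan = plan_shortest_path(0, GOAL)
outcome = execute(plan)
# The plan succeeds by the model's own criterion but produces an NSE:
# outcome == {"reached_goal": True, "nse": True}
```

A mitigation approach of the kind surveyed in the article would augment the agent with feedback about the unmodeled feature (for example, a learned penalty on entering cell 1), so that replanning trades a slightly longer path for avoiding the side effect.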
