
Algorithmic fog of war: When lack of transparency violates the law of armed conflict
Author(s) - Jonathan Kwik, Tom van Engers
Publication year - 2021
Publication title - Journal of Future Robot Life
Language(s) - English
Resource type - Journals
eISSN - 2589-9961
pISSN - 2589-9953
DOI - 10.3233/frl-200019
Subject(s) - transparency (behavior) , software deployment , incentive , international humanitarian law , computer security , law , business , law and economics , computer science , international law , political science , economics , microeconomics , operating system
Under international law, weapon capabilities and their use are regulated by legal requirements set by International Humanitarian Law (IHL). Currently, there are strong military incentives to equip capabilities with increasingly advanced artificial intelligence (AI), including opaque (less transparent) models. As opaque models sacrifice transparency for performance, it is necessary to examine whether their use remains in conformity with IHL obligations. First, we demonstrate that the incentives for automation drive AI toward complex task areas and dynamic, unstructured environments, which in turn necessitates resorting to more opaque solutions. We subsequently discuss the ramifications of opaque models for foreseeability and explainability. Then, we analyse their impact on IHL requirements from a development, pre-deployment and post-deployment perspective. We find that while IHL does not regulate opaque AI directly, the lack of foreseeability and explainability frustrates the fulfilment of key IHL requirements to the extent that the use of fully opaque AI could violate international law. States are urged to implement interpretability during development and to consider seriously the challenge of striking an appropriate balance between transparency and performance in their capabilities.