
Responsible application of artificial intelligence to surveillance: What prospects?
Author(s) - Roger Clarke
Publication year - 2022
Publication title - Information Polity
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.582
H-Index - 35
eISSN - 1875-8754
pISSN - 1570-1255
DOI - 10.3233/IP-211532
Subject(s) - scope (computer science), software deployment, harm, European Commission, risk analysis (engineering), computer science, computer security, political science, artificial intelligence, engineering ethics, business, law, European Union, engineering, economic policy
Abstract - Artificial Intelligence (AI) is one of the most significant of the information and communications technologies being applied to surveillance. AI’s proponents argue that its promise is great, and that successes have been achieved, whereas its detractors draw attention to the many threats embodied in it, some of which are much more problematic than those arising from earlier data analytical tools. This article considers the full gamut of regulatory mechanisms. The scope extends from natural and infrastructural regulatory mechanisms, via self-regulation, including the recently popular field of ‘ethical principles’, to co-regulatory and formal approaches. An evaluation is provided of the adequacy or otherwise of the world’s first proposal for formal regulation of AI practices and systems, by the European Commission. To lay the groundwork for the analysis, an overview is provided of the nature of AI. The conclusion reached is that, despite the threats inherent in the deployment of AI, the current safeguards are seriously inadequate, and the prospects for near-future improvement are far from good. To avoid undue harm from AI applications to surveillance, it is necessary to rapidly enhance the existing, already-inadequate safeguards and to establish additional protections.