
“Smart” Psychological Operations in Social Media: Security Challenges in China and Germany
Author(s) - D. Yu. Bazarkina, Darya Matyashova
Publication year - 2022
Publication title - European Conference on Social Media
Language(s) - English
Resource type - Journals
ISSN - 2055-7221
DOI - 10.34190/ecsm.9.1.174
Subject(s) - artificial intelligence , social media , computer security , internet privacy , terrorism , organised crime , reputation , politics , public relations , civil society , business , political science , law , computer science
Artificial intelligence (AI) is actively being incorporated into the communication process as it rapidly spreads and becomes cheaper for companies and other actors to use. AI has long underpinned the operation of social media: it drives platform algorithms, bots and deepfake technology, and is used for content monitoring and targeting. However, a growing variety of actors now employ AI, at times with malicious intent. For example, terrorist organizations use bots on social networks to spread propaganda and recruit new fighters. Crimes involving AI are increasing at a rapid pace, and their impact is extremely negative: mass protests demanding restrictions on the use of the technology, the recruitment of manipulated persons into criminal groups, the destruction of the reputations of victims of “smart” slander (sometimes leading to threats to their life and health), and so on. Combating these phenomena is a task not only for security agencies but also for civil society institutions, the academic community, legislators, politicians, and the business community, since the complex nature of the threat requires complex solutions involving all interested parties. This paper seeks answers to the following research questions: 1) what are the current threats to the psychological security of society caused by the malicious use of AI on social networks? 2) how do malicious (primarily non-state) actors carry out psychological operations through AI on social networks? 3) what impacts (behavioral, political, etc.) do such operations have on society? 4) how can the psychological security of society be protected, using both existing and innovative approaches? The answer to this last question is inextricably linked to the possibilities offered by international cooperation.
This paper examines the experiences of Germany and China, two leaders in the field of AI with different socio-political systems and differing approaches to a number of international issues. The paper concludes that increased international cooperation makes it possible to counter psychological operations conducted through AI more effectively and thereby protect society’s interests.