Open Access
Consideration of artificial intelligence through the prism of legal personality
Author(s) - А. Х. Рустамзаде, И. М. Алиев
Publication year - 2021
Publication title -
Naukovij vìsnik Užgorodsʹkogo nacìonalʹnogo unìversitetu. Serìâ Pravo
Language(s) - English
Resource type - Journals
eISSN - 2664-6153
pISSN - 2307-3322
DOI - 10.24144/2307-3322.2021.63.28
Subject(s) - normative , impossibility , subject (documents) , liability , personality , social responsibility , artificial intelligence , psychology , computer science , political science , law , social psychology , library science
The article notes that a global problem today is the almost complete absence of normative legal regulation of the functioning and activities of artificial intelligence, and that standardization in this area should be implemented at the global level. The world community is only beginning to grasp the real and potential effects of fully automated systems on vital areas of social relations and the growing ethical, social and legal problems associated with this trend. The authors pose the question of who will be directly responsible for a wrong decision proposed by "artificial intelligence" and implemented in practice, and several possible answers are considered. Only a conscious subject can be a subject of responsibility, and since weak systems lack autonomy, artificial intelligence cannot be blamed: measures of legal liability are simply not applicable to it, given, for example, the elementary impossibility for artificial intelligence to recognize the consequences of its harmful actions. In conclusion, it is argued that, for all its development and a speed of information processing that many times exceeds even the potential capabilities of a person, artificial intelligence remains a program tied to its material and technical support. Only a person is responsible for the actions of mechanisms, and it is the person who is put to the test. As for the direct responsibility of artificial intelligence, under current legal and social conditions the question of its hypothetical responsibility leads to a dead end, since measures of legal responsibility are simply inapplicable to it, again because of the elementary impossibility for artificial intelligence to realize the consequences of its harmful actions. Even if artificial intelligence can simulate human intelligence, it will not be self-aware, and therefore it can in no way claim any special fundamental rights.
