
An Empirical Model for Validity and Verification of AI Behavior: Overcoming AI Hazards in Neural Networks
Author(s) - Ayse Kok Arslan
Publication year - 2021
Publication title - International Journal of Computer and Technology
Language(s) - English
Resource type - Journals
ISSN - 2277-3061
DOI - 10.24297/ijct.v21i.9009
Subject(s) - artificial intelligence, artificial neural network, computer science, machine learning, applications of artificial intelligence, data science
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. This paper discusses hazards in machine learning systems, defined as unintended and harmful behavior that may emerge from the poor design of real-world AI systems, with a particular focus on artificial neural networks (ANNs). The paper reviews previous work in these areas and suggests research directions relevant to cutting-edge AI systems, especially those built on neural networks. Finally, the paper considers the high-level question of how to think most productively about the safety of forward-looking applications of AI.