Open Access
Internal Security Issues Related to Automatic System Malfunction and a Model to Explain Foresight of Experts and Non-Experts
Author(s) - Soichiro Morishita, Hiroshi Yokoi
Publication year - 2011
Publication title - Journal of Disaster Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.332
H-Index - 18
eISSN - 1883-8030
pISSN - 1881-2473
DOI - 10.20965/jdr.2011.p0498
Subject(s) - computer science , credence , relation (database) , risk analysis (engineering) , futures studies , perspective (graphical) , artificial intelligence , data mining , machine learning , business
Accidents or malfunctions in automatic systems often raise the question of whether the system’s designer could have foreseen the problem. In general, the opinions of experts are given more credence than those of non-experts, so if objective evidence shows that even experts could not have foreseen a malfunction, the malfunction is assumed to have been unforeseeable. Experts can make proper decisions based on their knowledge of what an automatic system is designed to cover, whereas non-experts may underestimate that coverage and therefore handle the system more cautiously. When a malfunction that no expert could foresee occurs in such a situation, and the outcome happens by chance to agree with a non-expert’s forecast, engineers are questioned beyond reason about their “responsibility,” a trend particularly marked in relation to computer systems. As described in this paper, the case in which an Okazaki City Library user was arrested is an appropriate case study for this problem. From the perspective of automatic machine design and engineering ethics, we discuss it as an internal security issue.