Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
Author(s) -
Erasmia Lois,
John A. Forester,
Tuan Q. Tran,
Stacey M. L. Hendrickson,
Ronald L. Boring
Publication year - 2008
Publication title -
OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information)
Language(s) - English
Resource type - Reports
DOI - 10.2172/974398
Subject(s) - benchmarking, reliability (semiconductor), human reliability, scope (computer science), computer science, risk analysis (engineering), empirical research, probabilistic logic, management science, reliability engineering, engineering, human error, artificial intelligence, business, statistics, mathematics, power (physics), physics, quantum mechanics, marketing, programming language
A diversity of human reliability analysis (HRA) methods is available for assessing human performance within probabilistic risk assessment (PRA). Because the methods differ significantly in scope, approach, and underlying models, there is a need for an empirical comparison investigating their validity and reliability. To accomplish this comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. To account for as many effects as possible in the design of this benchmarking study, a literature review was conducted of past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned from these studies are presented to aid in the design of future HRA benchmarking endeavors.