Determining Differences in Expert Judgment: Implications for Knowledge Acquisition and Validation *
Author(s) - O'Leary, Daniel E.
Publication year - 1993
Publication title - Decision Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.238
H-Index - 108
eISSN - 1540-5915
pISSN - 0011-7315
DOI - 10.1111/j.1540-5915.1993.tb00480.x
Subject(s) - computer science , expert system , subject matter expert , knowledge acquisition , knowledge management , artificial intelligence , data mining , data science , epistemology
In knowledge acquisition, it is often desirable to aggregate the judgments of multiple experts into a single system. In some cases this takes the form of averaging the judgments of those experts. In these situations it is desirable to determine whether the experts have different views of the world before their individual judgments are aggregated. In validation, multiple experts are often employed to compare the performance of expert systems and other human actors. Those judgments are then frequently averaged to establish the performance quality of the expert system. An important part of the comparison process should be determining whether the experts share a similar view of the world. If the experts do not have similar views, their evaluations of performance may differ, resulting in a meaningless average performance measure. Alternatively, if all the validating experts do have similar views of the world, then the validation process may result in paradigm myopia.
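The abstract's central recommendation, checking whether experts share a similar view of the world before averaging their judgments, can be illustrated with a small sketch. The function names, the correlation-based agreement measure, and the threshold below are illustrative assumptions, not the paper's method; the paper itself is concerned with the general implications of such differences rather than any single statistic.

```python
# Hypothetical sketch: before averaging multiple experts' ratings of the
# same cases, verify that every pair of experts agrees reasonably well.
# The pairwise Pearson-correlation test and the 0.5 threshold are
# illustrative choices, not prescribed by the paper.
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def aggregate_if_consistent(ratings, threshold=0.5):
    """Average the experts' ratings per case only if every pair of experts
    correlates above `threshold`; otherwise return the disagreeing pair
    instead of a (possibly meaningless) average."""
    for (i, a), (j, b) in combinations(enumerate(ratings), 2):
        r = pearson(a, b)
        if r < threshold:
            return None, (i, j, r)  # experts i and j differ too much
    n_cases = len(ratings[0])
    avg = [sum(e[c] for e in ratings) / len(ratings) for c in range(n_cases)]
    return avg, None
```

For example, two experts rating five cases as `[1, 2, 3, 4, 5]` and `[2, 3, 4, 5, 6]` are highly correlated, so their averaged ratings are returned; ratings of `[1, 2, 3, 4, 5]` against `[5, 4, 3, 2, 1]` are perfectly anti-correlated, and the sketch refuses to average them, which is exactly the situation in which the abstract argues an average would be meaningless.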