The Aggregation of Expert Judgment: Do Good Things Come to Those Who Weight?
Author(s) - Bolger, Fergus; Rowe, Gene
Publication year - 2015
Publication title - Risk Analysis
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.972
H-Index - 130
eISSN - 1539-6924
pISSN - 0272-4332
DOI - 10.1111/risa.12272
Good policy making should be based on available scientific knowledge. Sometimes this knowledge is well established through research, but often scientists must simply express their judgment, and this is particularly so in risk scenarios that are characterized by high levels of uncertainty. Usually in such cases, the opinions of several experts will be sought in order to pool knowledge and reduce error, raising the question of whether individual expert judgments should be given different weights. We argue—against the commonly advocated “classical method”—that no significant benefits are likely to accrue from unequal weighting in mathematical aggregation. Our argument hinges on the difficulty of constructing reliable and valid measures of substantive expertise upon which to base weights. Practical problems associated with attempts to evaluate experts are also addressed. While our discussion focuses on one specific weighting scheme that is currently gaining in popularity for expert knowledge elicitation, our general thesis applies to externally imposed unequal weighting schemes more generally.
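The weighting debate the abstract summarizes concerns mathematical aggregation of expert estimates by linear pooling, either with equal weights or with unequal, performance-based weights (as in schemes like the "classical method," where weights are typically derived from experts' performance on calibration questions). As a minimal sketch of the two aggregation rules being compared, not taken from the paper, here is a short Python example; the expert estimates and the performance weights are invented purely for illustration.

```python
import numpy as np

# Hypothetical point estimates from four experts for the same uncertain quantity
# (e.g., a probability of an adverse event). Values are invented for illustration.
estimates = np.array([0.62, 0.55, 0.70, 0.58])

# Hypothetical performance-based weights (e.g., derived from calibration scores);
# they must be non-negative and sum to 1.
performance_weights = np.array([0.40, 0.10, 0.35, 0.15])

# Equal-weight linear pool: the simple average of the expert estimates.
equal_pool = estimates.mean()

# Unequal-weight linear pool: the weighted average using performance-based weights.
weighted_pool = np.dot(performance_weights, estimates)

print(f"Equal-weight aggregate:         {equal_pool:.3f}")
print(f"Performance-weighted aggregate: {weighted_pool:.3f}")
```

The paper's argument is that the gain from the second rule over the first depends on having reliable and valid measures of substantive expertise from which to derive such weights, and that this condition is difficult to meet in practice.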