Peer Assessment of Aviation Performance: Inconsistent for Good Reasons
Author(s) -
Roth Wolff-Michael,
Mavin Timothy J.
Publication year - 2015
Publication title -
Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/cogs.12152
Subject(s) - aviation , affect (linguistics) , psychology , variance (accounting) , reliability (semiconductor) , cognition , cognitive psychology , applied psychology , computer science , social psychology , engineering , power (physics) , physics , accounting , communication , quantum mechanics , neuroscience , business , aerospace engineering
Research into expertise is common in cognitive science, with studies spanning many domains. However, much less research has examined how experts within the same domain assess the performance of their expert peers. We report the results of a modified think‐aloud study conducted with 18 pilots (6 first officers, 6 captains, and 6 flight examiners). Pairs of same‐ranked pilots were asked to rate the performance of a captain flying a critical pre‐recorded simulator scenario. Findings reveal (a) considerable variance within performance categories, (b) differences in the processes used as evidence in support of a performance rating, (c) different numbers and types of facts (cues) identified, and (d) differences in how specific performance events affect the choice of performance category and the gravity of the performance assessment. Such variance is consistent with low inter‐rater reliability. Because raters exhibited good, albeit imprecise, reasons and facts, a fuzzy mathematical model of performance rating was developed. The model shows good agreement with the observed variations.
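The abstract does not reproduce the paper's fuzzy model, but the general idea of fuzzy performance rating can be sketched. The following is a minimal, hypothetical illustration (the category names, membership functions, and evidence scale are assumptions, not the authors' model): each rater applies slightly different fuzzy category boundaries to the same observed evidence, so identical facts can yield different crisp ratings.

```python
# Hypothetical sketch of fuzzy performance rating. The membership shapes,
# category names, and the 0-10 "evidence score" are illustrative assumptions;
# they are NOT the model described in the paper.

def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from 0 at a to 1 at b, falls to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Two raters with slightly different category boundaries (a, b, c)
# over a shared 0-10 evidence scale (e.g., a weighted count of cues).
CATEGORIES_RATER_A = {
    "unsatisfactory": (0, 0, 4),
    "satisfactory":   (2, 5, 8),
    "good":           (6, 10, 10),
}
CATEGORIES_RATER_B = {
    "unsatisfactory": (0, 0, 3),
    "satisfactory":   (2, 4, 7),
    "good":           (5, 10, 10),
}

def rate(evidence_score, categories):
    """Return the category with the highest membership, plus all memberships."""
    memberships = {name: triangular(evidence_score, *params)
                   for name, params in categories.items()}
    return max(memberships, key=memberships.get), memberships

score = 6.5  # identical observed evidence for both raters
label_a, _ = rate(score, CATEGORIES_RATER_A)
label_b, _ = rate(score, CATEGORIES_RATER_B)
print(label_a, label_b)  # prints "satisfactory good": same facts, different ratings
```

With overlapping categories like these, a borderline evidence score falls in the fuzzy region between two labels, so modest differences in where raters draw their boundaries produce the kind of within-category variance and low inter-rater agreement the study reports, even when every rater is reasoning soundly from the same cues.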