Who Is a Better Decision Maker? Data‐Driven Expert Ranking Under Unobserved Quality
Author(s) - Tomer Geva, Maytal Saar-Tsechansky
Publication year - 2021
Publication title - Production and Operations Management
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.279
H-Index - 110
eISSN - 1937-5956
pISSN - 1059-1478
DOI - 10.1111/poms.13260
Subject(s) - computer science, ranking (information retrieval), quality (philosophy), benchmark (surveying), decision quality, data quality, learning to rank, rank (graph theory), task (project management), business decision mapping, data science, machine learning, knowledge management, decision support system, data mining, marketing, business, metric (unit), team effectiveness, economics, philosophy, mathematics, epistemology, geodesy, management, combinatorics, geography
The capacity to rank expert workers by their decision quality is a key managerial task of substantial significance to business operations. However, when no ground truth information is available on experts’ decisions, the evaluation of expert workers typically requires enlisting peer‐experts, and this form of evaluation is prohibitively costly in many important settings. In this work, we develop a data‐driven approach for producing effective rankings based on the decision quality of expert workers; our approach leverages historical data on past decisions, which are commonly available in organizational information systems. Specifically, we first formulate a new business data science problem: Ranking Expert decision makers’ unobserved decision Quality (REQ) using only historical decision data and excluding evaluation by peer experts. The REQ problem is challenging because the correct decisions in our settings are unknown (unobserved) and because some of the information used by decision makers might not be available for retrospective evaluation. To address the REQ problem, we develop a machine‐learning–based approach and analytically and empirically explore conditions under which our approach is advantageous. Our empirical results over diverse settings and datasets show that our method yields robust performance: Its rankings of expert workers are consistently either superior or at least comparable to those obtained by the best alternative approach. Accordingly, our method constitutes a de facto benchmark for future research on the REQ problem.
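To make the REQ idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: it assumes a proxy model trained on pooled historical decisions can stand in for the unobserved ground truth, and ranks each (synthetic) expert by agreement with that proxy. All data, expert names, and accuracy values below are fabricated for illustration.

```python
# Hypothetical sketch of ranking experts without ground truth:
# train a proxy model on pooled historical decisions, then rank
# experts by agreement with the proxy's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical cases; the "true" labels are unobserved in practice.
n_cases, n_features = 500, 5
X = rng.normal(size=(n_cases, n_features))
true_labels = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Experts with different (unknown) accuracies produce observed decisions.
expert_accuracy = {"A": 0.9, "B": 0.75, "C": 0.6}  # assumed values
decisions = {
    e: np.where(rng.random(n_cases) < acc, true_labels, 1 - true_labels)
    for e, acc in expert_accuracy.items()
}

# Pool the experts' decisions by majority vote and fit a proxy model.
pooled = np.round(np.mean(list(decisions.values()), axis=0)).astype(int)
proxy = LogisticRegression().fit(X, pooled).predict(X)

# Score each expert by agreement with the proxy, then rank.
scores = {e: float(np.mean(d == proxy)) for e, d in decisions.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

The sketch sidesteps the unobserved-label problem by treating pooled peer decisions as a noisy stand-in for the truth; the paper's machine-learning approach addresses the same problem without assuming any particular pooling scheme.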
