Data‐driven Evaluation of Visual Quality Measures
Author(s) - Sedlmair M., Aupetit M.
Publication year - 2015
Publication title - Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.12632
Subject(s) - computer science, ground truth, outlier, artificial intelligence, class (philosophy), measure (data warehouse), set (abstract data type), data set, visualization, quality (philosophy), data mining, machine learning, pattern recognition (psychology), philosophy, epistemology, programming language
Visual quality measures seek to algorithmically imitate human judgments of patterns such as class separability, correlation, or outliers. In this paper, we propose a novel data‐driven framework for evaluating such measures. The basic idea is to take a large set of visually encoded data, such as scatterplots, with reliable human "ground truth" judgments, and to use this human‐labeled data to learn how well a measure would predict human judgments on previously unseen data. Measures can then be evaluated based on predictive performance—an approach that is crucial for generalizing across datasets but has gained little attention so far. To illustrate our framework, we use it to evaluate 15 state‐of‐the‐art class separation measures, using human ground truth data from 828 class separation judgments on color‐coded 2D scatterplots.
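The evaluation idea described in the abstract can be sketched as a standard cross-validation loop: fit a simple decision rule mapping a measure's scores to human labels on training plots, then score how well that rule predicts the labels of held-out plots. The sketch below is a minimal illustration on synthetic data, not the paper's actual pipeline; the measure scores, labels, and threshold rule are all hypothetical assumptions.

```python
# Minimal sketch of data-driven evaluation of a visual quality measure.
# Assumption: each scatterplot has one scalar score from a hypothetical
# separability measure, plus a binary human judgment (separable or not).
import numpy as np

def best_threshold(scores, labels):
    """Learn the score threshold that best reproduces the human labels
    on training data (higher score -> predict 'separable')."""
    best_t, best_acc = scores[0], -1.0
    for t in np.unique(scores):
        acc = np.mean((scores >= t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def cv_accuracy(scores, labels, k=5, seed=0):
    """k-fold cross-validated accuracy: how well the measure predicts
    human judgments on previously unseen plots."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(scores)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        t = best_threshold(scores[train], labels[train])
        accs.append(np.mean((scores[test] >= t) == labels[test]))
    return float(np.mean(accs))

# Hypothetical example: 200 plots whose measure scores are noisy but
# informative about the human labels.
rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=200).astype(bool)
scores = labels + rng.normal(0.0, 0.4, size=200)
print(f"cross-validated accuracy: {cv_accuracy(scores, labels):.3f}")
```

In this framing, each of the 15 measures would be scored by its own cross-validated predictive performance on the human-labeled scatterplots, making the measures directly comparable on unseen data.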