7.1: Invited Paper: How to Quantify Vision‐for‐Recognition and Vision‐for‐Action for Distinct Display Form Factors
Author(s) - Yang Shun-nan
Publication year - 2018
Publication title - SID Symposium Digest of Technical Papers
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.351
H-Index - 44
eISSN - 2168-0159
pISSN - 0097-966X
DOI - 10.1002/sdtp.12639
Subject(s) - flicker , action (physics) , gaze , task (project management) , visual search , eye movement , computer vision , computer science , artificial intelligence , quality (philosophy) , gaze contingency paradigm , human visual system model , visual perception , psychology , image (mathematics) , perception , neuroscience , computer graphics (images) , philosophy , physics , epistemology , quantum mechanics , management , economics
Human vision is roughly composed of two distinct but integrated functional pathways: vision‐for‐recognition and vision‐for‐action. Likewise, display form factors can be categorized as passive viewing (e.g., office work on a computer or movie viewing on a TV) and active interaction (e.g., VR and AR in gaming). Theoretical and empirical findings suggest that the encoding of visual inputs can be affected by task demands. Here we evaluated two corresponding methods of measuring image quality for visual consumption. The method for measuring vision‐for‐recognition involves direct visual comparison to discern any degradation in visual quality; visual attention is critical in such a paradigm. Conversely, a novel method of assessing visual quality for vision‐for‐action is to measure how visuomotor responses are facilitated or altered independent of conscious detection. We report empirical data obtained with a gaze‐contingent flicker paradigm that measures both the rate of detecting visual degradation and changes in eye movements. These findings show that consciously invisible changes in the visual image can alter viewing eye movements and conscious decisions; conversely, the likelihood of detecting visual degradation can be overestimated because of artificially manipulated attention. These findings call for a new approach to measuring visual quality for different form factors and task requirements.
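The abstract does not specify how the gaze‐contingent flicker paradigm was implemented. Below is a minimal, illustrative sketch of the general structure such a paradigm could take, assuming a velocity‐based saccade trigger and synthetic gaze samples in place of a real eye tracker; the names (SAMPLE_RATE_HZ, SACCADE_VELOCITY_DEG_S, synthetic_gaze_stream, run_trial) and all parameter values are hypothetical and not taken from the paper.

# Illustrative sketch of a gaze-contingent flicker loop (an assumption about
# the paradigm's general structure, not the authors' actual implementation).
# Synthetic gaze samples stand in for an eye tracker; image "degradation" is
# toggled only while gaze velocity exceeds a saccade threshold, i.e., when
# saccadic suppression makes the change least likely to reach awareness.

import random

SAMPLE_RATE_HZ = 250           # assumed eye-tracker sampling rate
SACCADE_VELOCITY_DEG_S = 30.0  # assumed velocity threshold for saccade onset

def synthetic_gaze_stream(n_samples):
    """Yield (x, y) gaze positions in degrees: fixational jitter with occasional saccades."""
    x, y = 0.0, 0.0
    for _ in range(n_samples):
        if random.random() < 0.01:           # occasional saccade
            x += random.uniform(-8, 8)
            y += random.uniform(-8, 8)
        else:                                # fixational jitter
            x += random.gauss(0, 0.05)
            y += random.gauss(0, 0.05)
        yield (x, y)

def run_trial(n_samples=2500):
    degraded = False
    swaps = 0
    prev = None
    dt = 1.0 / SAMPLE_RATE_HZ
    for x, y in synthetic_gaze_stream(n_samples):
        if prev is not None:
            velocity = ((x - prev[0]) ** 2 + (y - prev[1]) ** 2) ** 0.5 / dt
            # Swap image quality only while the eye is in flight.
            if velocity > SACCADE_VELOCITY_DEG_S:
                degraded = not degraded
                swaps += 1
                # In an actual study, the degraded/original frame would be
                # rendered here, and detection responses plus post-swap eye
                # movements would be logged for later analysis.
        prev = (x, y)
    return swaps

if __name__ == "__main__":
    print(f"quality swaps triggered during saccades: {run_trial()}")

In this sketch, the detection rate of the quality swap and the properties of subsequent eye movements would be the two dependent measures, corresponding to the vision‐for‐recognition and vision‐for‐action assessments described in the abstract.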