Open Access
Deepcomics
Author(s) -
Kévin Bannier,
Eakta Jain,
Olivier Le Meur
Publication year - 2018
Publication title -
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/3204493.3204560
Subject(s) - eye tracking , computer science , comics , software deployment , gaze , artificial intelligence , deep learning , key (lock) , computer vision , tracking (education) , psychology , computer security , pedagogy , operating system
A key requirement for training deep learning saliency models is a large eye tracking dataset. Although eye tracking technology has become far more accessible, collecting eye tracking data at scale remains cumbersome for very specific content types such as comic images, which differ from natural images such as photographs because textual and pictorial content are integrated. In this paper, we show that a deep network trained on visual categories whose gaze deployment is similar to that of comics outperforms both existing models and models trained on visual categories whose gaze deployment differs dramatically from comics. Further, we find that a computationally generated dataset for a visual category close to comics is a better training source than real eye tracking data from a visual category with a different gaze deployment. These findings hold implications for the transfer of deep networks to different domains.
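The comparison above hinges on quantifying how similar the gaze deployment of two visual categories is. One common way to do this in the saliency literature is the linear correlation coefficient (CC) between saliency maps. The sketch below is illustrative only (the function name `cc` and the random test maps are not from the paper), assuming maps are given as 2-D NumPy arrays:

```python
import numpy as np

def cc(sal_a, sal_b):
    """Linear correlation coefficient (CC) between two saliency maps.

    Both maps are z-normalized, then the mean elementwise product gives
    the Pearson correlation: 1.0 for identical maps, ~0 for unrelated ones.
    """
    a = (sal_a - sal_a.mean()) / (sal_a.std() + 1e-8)
    b = (sal_b - sal_b.mean()) / (sal_b.std() + 1e-8)
    return float((a * b).mean())

# Sanity check on synthetic maps: a map correlates perfectly with itself,
# while two independent noise maps are close to uncorrelated.
rng = np.random.default_rng(0)
m1 = rng.random((64, 64))
m2 = rng.random((64, 64))
print(round(cc(m1, m1), 3))  # → 1.0
print(abs(cc(m1, m2)) < 0.1)  # → True
```

A high CC between the saliency maps of two categories would indicate similar gaze deployment, which is the regime in which the paper finds transfer to comics works best.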
