Learning to Detect Objects from Eye-Tracking Data
Author(s) -
D. P. Papadopoulos,
Alasdair D. F. Clarke,
Frank Keller,
Vittorio Ferrari
Publication year - 2014
Publication title -
i-perception
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.64
H-Index - 26
ISSN - 2041-6695
DOI - 10.1068/ii57
Subject(s) - artificial intelligence, computer science, computer vision, object detection, eye tracking, fixation
One of the bottlenecks in computer vision, especially in object detection, is the need for a large amount of training data, which is typically acquired by manually annotating images. In this study, we explore the possibility of using eye-trackers to provide training data for supervised machine learning. We have created a new large-scale eye-tracking dataset, collecting fixation data for 6270 images from the Pascal VOC 2012 database, covering 10 of the 20 classes included in the Pascal database. Each image was viewed by 5 observers, and a total of over 178k fixations have been collected. While previous attempts at using fixation data in computer vision were based on a free-viewing paradigm, we used a visual search task in order to increase the proportion of fixations on the target object. Furthermore, we divided the dataset into five pairs of semantically similar classes (cat/dog, bicycle/motorbike, horse/cow, boat/aeroplane and sofa/diningtable), with the observer having to decide which class each image belonged to. This kept the observer's task simple, while decreasing the chance of them using the scene gist to identify the target parafoveally. To alleviate the central bias in scene viewing, the images were presented to the observers with a random offset. The goal of our project is to use the eye-tracking information to detect and localise the attended objects. Our model so far, based on features representing the locations of the fixations and an appearance model of the attended regions, can successfully predict the location of the target objects in over half of the images.
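To make the fixation-based localisation idea concrete, the sketch below shows one simple way to turn a set of fixation coordinates into a candidate bounding box. This is only an illustrative baseline under assumed inputs (pixel-coordinate fixations, a fixed padding heuristic, and the hypothetical function name fixation_bbox); it is not the authors' model, which additionally uses an appearance model of the attended regions.

```python
import numpy as np

def fixation_bbox(fixations, img_w, img_h, pad_frac=0.15):
    """Propose a bounding box around the fixated region.

    fixations: iterable of (x, y) fixation coordinates in pixels.
    pad_frac:  fraction of the image size added as padding, since fixations
               tend to cluster near the object centre rather than its edges
               (the 0.15 value is an arbitrary choice for this sketch).
    Returns (x_min, y_min, x_max, y_max), clipped to the image bounds.
    """
    pts = np.asarray(fixations, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    pad_x, pad_y = pad_frac * img_w, pad_frac * img_h
    return (max(0.0, x_min - pad_x), max(0.0, y_min - pad_y),
            min(float(img_w), x_max + pad_x), min(float(img_h), y_max + pad_y))

# Example: pooled fixations from five observers on a 500x375 image.
fix = [(210, 150), (232, 168), (198, 175), (240, 160), (220, 182)]
print(fixation_bbox(fix, img_w=500, img_h=375))
```

In practice such a purely geometric proposal would then be refined or rescored with image appearance features, in the spirit of the attended-region appearance model described in the abstract.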