Open Access
Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost
Author(s) -
Qingjie Zhao,
Christof Koch
Publication year - 2012
Publication title - Journal of Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.126
H-Index - 113
ISSN - 1534-7362
DOI - 10.1167/12.6.22
To predict where subjects look under natural viewing conditions, biologically inspired saliency models decompose visual input into a set of feature maps across spatial scales. The outputs of these feature maps are summed to yield the final saliency map. We studied the integration of bottom-up feature maps across multiple spatial scales using eye movement data from four recent eye tracking datasets. We use AdaBoost as the central computational module, handling feature selection, thresholding, weight assignment, and integration in a principled, nonlinear learning framework. By combining the outputs of feature maps via a series of nonlinear classifiers, the new model consistently predicts eye movements better than any of its competitors.
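The boosting scheme the abstract describes — each round selecting one feature map, a threshold on it, and a weight, then summing the weighted weak classifiers into a nonlinear saliency score — can be sketched as below. This is a minimal illustration of AdaBoost with decision stumps on synthetic data, not the authors' implementation; all function names and the data are hypothetical.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """AdaBoost with decision stumps.

    X: (n_samples, n_features) feature-map values at sampled pixels.
    y: labels in {-1, +1} (e.g. fixated vs. non-fixated locations).
    Returns a list of weak classifiers (feature, threshold, polarity, alpha).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # sample weights, updated each round
    stumps = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # Exhaustive search over features, candidate thresholds, polarities.
        for f in range(d):
            vals = X[:, f]
            for thr in np.quantile(vals, np.linspace(0.1, 0.9, 9)):
                for pol in (1, -1):
                    pred = pol * np.sign(vals - thr)
                    pred[pred == 0] = pol
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (f, thr, pol)
        f, thr, pol = best
        eps = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - eps) / eps)   # weight of this weak classifier
        pred = pol * np.sign(X[:, f] - thr)
        pred[pred == 0] = pol
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified samples
        w /= w.sum()
        stumps.append((f, thr, pol, alpha))
    return stumps

def saliency_score(X, stumps):
    """Weighted vote of the stumps: a nonlinear combination of feature maps."""
    s = np.zeros(X.shape[0])
    for f, thr, pol, alpha in stumps:
        pred = pol * np.sign(X[:, f] - thr)
        pred[pred == 0] = pol
        s += alpha * pred
    return s
```

Because each stump thresholds a single feature map, the final score is a piecewise-constant, nonlinear function of the maps rather than a simple weighted sum, which is the key difference from the classic linear-summation saliency model.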
