CaltechAUTHORS
  A Caltech Library Service

Learning a saliency map using fixated locations in natural scenes

Zhao, Qi and Koch, Christof (2011) Learning a saliency map using fixated locations in natural scenes. Journal of Vision, 11 (3). Art. No. 9. ISSN 1534-7362. http://resolver.caltech.edu/CaltechAUTHORS:20110422-145234601

PDF (Published Version), 3008 KB


Abstract

Inspired by the primate visual system, computational saliency models decompose visual input into a set of feature maps across spatial scales in a number of pre-specified channels. The outputs of these feature maps are summed to yield the final saliency map. Here we use a least-squares technique to learn the weights associated with these maps from subjects freely fixating natural scenes drawn from four recent eye-tracking data sets. Depending on the data set, the weights can be quite different, with the face and orientation channels usually more important than the color and intensity channels. Inter-subject differences are negligible. We also model a bias toward fixating at the center of images and consider both time-varying and constant factors that contribute to this bias. To compensate for the inadequacy of the standard method of judging performance (area under the ROC curve), we use two other metrics to comprehensively assess performance. Although our model retains the basic structure of the standard saliency model, it outperforms several state-of-the-art saliency algorithms. Furthermore, the simple structure makes the results applicable to numerous studies in psychophysics and physiology and leads to an extremely easy implementation for real-world applications.
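The core idea described in the abstract, learning per-channel weights by least squares so that a weighted sum of feature maps best predicts where subjects fixate, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the channel count, the synthetic data, and the function names are assumptions made for the example.

```python
import numpy as np

def learn_channel_weights(feature_maps, fixation_map):
    """Fit linear weights w so that sum_i w[i] * feature_maps[i] ~= fixation_map.

    feature_maps: array of shape (n_channels, H, W), e.g. color, intensity,
                  orientation, and face channels (channel set is illustrative).
    fixation_map: array of shape (H, W), e.g. a smoothed map of fixated locations.
    """
    n_channels = feature_maps.shape[0]
    # Stack each feature map as one column of the design matrix: (H*W, n_channels)
    X = feature_maps.reshape(n_channels, -1).T
    y = fixation_map.ravel()
    # Ordinary least squares: minimize ||X w - y||^2
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def combine(feature_maps, w):
    """Weighted sum of the channel maps, yielding the final saliency map."""
    return np.tensordot(w, feature_maps, axes=1)

# Toy usage with synthetic maps: recover known weights from a noiseless target.
rng = np.random.default_rng(0)
maps = rng.random((4, 8, 8))                  # 4 hypothetical channels
true_w = np.array([0.1, 0.2, 0.5, 0.9])      # e.g. face channel weighted most
target = np.tensordot(true_w, maps, axes=1)  # stand-in for a fixation map
w = learn_channel_weights(maps, target)
```

In the noiseless toy case the fitted weights match the generating weights exactly; with real fixation data the fit would of course be approximate, and the paper additionally models a center bias that this sketch omits.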


Item Type: Article
Related URLs:
  URL                                            URL Type   Description
  http://dx.doi.org/10.1167/11.3.9               DOI        Article
  http://www.journalofvision.org/content/11/3/9  Publisher  Article
ORCID:
  Author          ORCID
  Koch, Christof  0000-0001-6482-8067
Additional Information: © 2011 ARVO. Received September 23, 2010; published March 10, 2011. This research was supported by the NeoVision Program at DARPA, the ONR, the Mathers Foundation, and the WCU (World Class University) Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (R31-10008).
Group: Koch Laboratory, KLAB
Funders:
  Funding Agency                                         Grant Number
  Defense Advanced Research Projects Agency (DARPA)      UNSPECIFIED
  Office of Naval Research (ONR)                         UNSPECIFIED
  G. Harold and Leila Y. Mathers Charitable Foundation   UNSPECIFIED
  Ministry of Education, Science and Technology (Korea)  R31-10008
Subject Keywords: computational saliency model; feature combination; center bias; inter-subject variability; metric
Record Number: CaltechAUTHORS:20110422-145234601
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20110422-145234601
Official Citation: Zhao, Q., & Koch, C. (2011). Learning a saliency map using fixated locations in natural scenes. Journal of Vision, 11(3):9, 1–15, http://www.journalofvision.org/content/11/3/9, doi:10.1167/11.3.9.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 23430
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 22 Apr 2011 23:04
Last Modified: 13 Mar 2017 22:40
