CaltechAUTHORS
  A Caltech Library Service

Faces and text attract gaze independent of the task: Experimental data and computer model

Cerf, Moran and Frady, E. Paxon and Koch, Christof (2009) Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9 (12). pp. 1-15. ISSN 1534-7362. http://resolver.caltech.edu/CaltechAUTHORS:20130816-103355264



Abstract

Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? Here we investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attention by tracking the eye movements of subjects in two types of tasks: free viewing and search. We observed that subjects in free-viewing conditions look at faces and text 16.6 and 11.1 times more often, respectively, than at comparable regions normalized for size and position. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer to make their initial saccade when they were told to avoid faces/text and that saccade landed on a non-face/non-text object. We refine a well-known bottom-up computer model of saliency-driven attention, which includes conspicuity maps for color, orientation, and intensity, by adding high-level semantic information (i.e., the locations of faces or text) and demonstrate that this significantly improves its ability to predict eye fixations in natural images. Our enhanced model's predictions yield an area under the ROC curve of over 84% for images that contain faces or text when compared against the actual fixation patterns of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map.
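The enhanced model described in the abstract combines bottom-up conspicuity maps (color, orientation, intensity) with a high-level semantic channel marking face/text locations, and is evaluated by ROC analysis against recorded fixations. The following is a minimal Python sketch of that pipeline only; the maps, weights, simple peak normalization, and Mann-Whitney AUC below are illustrative assumptions, not the authors' actual implementation or data.

```python
def normalize(channel):
    """Scale a 2-D map (list of lists) so its global maximum is 1."""
    peak = max(max(row) for row in channel)
    if peak == 0:
        return channel
    return [[v / peak for v in row] for row in channel]

def enhanced_saliency(color, orientation, intensity, semantic, w_semantic=1.0):
    """Mean of normalized bottom-up conspicuity maps plus a weighted
    semantic (face/text) map, as in the abstract's enhanced model."""
    bottom_up = [normalize(m) for m in (color, orientation, intensity)]
    sem = normalize(semantic)
    rows, cols = len(color), len(color[0])
    return [[sum(m[r][c] for m in bottom_up) / 3 + w_semantic * sem[r][c]
             for c in range(cols)] for r in range(rows)]

def roc_auc(fixated_vals, control_vals):
    """Mann-Whitney AUC: probability that a fixated location's saliency
    exceeds a control location's saliency (ties count half)."""
    wins = sum((f > c) + 0.5 * (f == c)
               for f in fixated_vals for c in control_vals)
    return wins / (len(fixated_vals) * len(control_vals))
```

With toy 2x2 maps, a location flagged by the semantic channel dominates the combined map, so fixations landing there yield a high AUC against control locations; the paper reports AUC above 84% on real images with faces or text.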


Item Type: Article
Related URLs:
URL | URL Type | Description
http://dx.doi.org/10.1167/9.12.10 | DOI | Article
http://www.journalofvision.org/content/9/12/10.abstract | Publisher | Article
ORCID:
Author | ORCID
Koch, Christof | 0000-0001-6482-8067
Additional Information: © 2009 ARVO. Received November 10, 2008; published November 18, 2009. This research was supported by the National Institute of Mental Health and the Mathers Foundation. The authors wish to thank Kelsey Laird for valuable comments.
Group: Koch Laboratory, KLAB
Funders:
Funding Agency | Grant Number
National Institute of Mental Health (NIMH) | UNSPECIFIED
G. Harold and Leila Y. Mathers Charitable Foundation | UNSPECIFIED
Subject Keywords: attention, eye tracking, faces, text, natural scenes, saliency model, human psychophysics
Record Number: CaltechAUTHORS:20130816-103355264
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20130816-103355264
Official Citation: Cerf, M., Frady, E. P., & Koch, C. (2009). Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9(12):10, 1–15, http://journalofvision.org/9/12/10/, doi:10.1167/9.12.10.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 40670
Collection: CaltechAUTHORS
Deposited By: KLAB Import
Deposited On: 14 Apr 2010 03:33
Last Modified: 13 Mar 2017 22:52
