Cerf, Moran and Frady, E. Paxon and Koch, Christof (2009) Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9 (12). pp. 1-15. ISSN 1534-7362. http://resolver.caltech.edu/CaltechAUTHORS:20130816-103355264
Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to focus rapidly on the facial regions. But is this true of other high-level image features as well? Here we investigate the extent to which faces, text elements, and cell phones (a suitable control) in natural scenes attract attention by tracking subjects' eye movements in two types of task: free viewing and search. We observed that in free-viewing conditions subjects look at faces and text 16.6 and 11.1 times more often, respectively, than at comparable regions matched for size and position. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer to make their initial saccade when they were told to avoid faces/text and their saccade landed on a non-face/non-text object. We refine a well-known bottom-up computer model of saliency-driven attention, which includes conspicuity maps for color, orientation, and intensity, by adding high-level semantic information (i.e., the locations of faces or text), and demonstrate that this significantly improves the ability to predict eye fixations in natural images. Our enhanced model's predictions yield an area under the ROC curve above 84% for images containing faces or text when compared against subjects' actual fixation patterns. This suggests that the primate visual system allocates attention using such an enhanced saliency map.
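The enhanced model described above can be pictured as follows: bottom-up conspicuity maps (color, intensity, orientation) are combined into a single saliency map, a high-level channel marking face/text locations is added, and the result is scored against recorded fixations via the area under the ROC curve. The sketch below illustrates this pipeline in simplified form; the map normalization, the `w_face` weight, and the function names are assumptions for illustration, not the paper's actual implementation (the real Itti-Koch normalization and channel weighting are more involved).

```python
import numpy as np

def _norm(m):
    """Scale a map to [0, 1] (simplified stand-in for the model's
    normalization operator)."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def combine_saliency(color, intensity, orientation, face_channel, w_face=1.0):
    """Combine bottom-up conspicuity maps with a high-level face/text
    channel into one saliency map (w_face is a hypothetical weight)."""
    bottom_up = (_norm(color) + _norm(intensity) + _norm(orientation)) / 3.0
    return _norm(bottom_up + w_face * _norm(face_channel))

def fixation_auc(saliency, fixations, n_thresholds=100):
    """ROC area under the curve: sweep a threshold over the saliency map,
    treating fixated pixels as positives and all pixels as the reference
    distribution (fixations: list of (row, col))."""
    fix_vals = np.array([saliency[r, c] for r, c in fixations])
    all_vals = saliency.ravel()
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    tpr = np.array([(fix_vals >= t).mean() for t in thresholds])
    fpr = np.array([(all_vals >= t).mean() for t in thresholds])
    # Integrate TPR over FPR (reverse so FPR is increasing).
    return np.trapz(tpr[::-1], fpr[::-1])
```

With a synthetic image whose only salient region is a face patch, fixations landing on that patch yield an AUC close to 1, mirroring the paper's finding that adding the face/text channel sharpens fixation prediction.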
Additional Information: © 2009 ARVO. Received November 10, 2008; published November 18, 2009. This research was supported by the National Institute for Mental Health and the Mathers Foundation. The authors wish to thank Kelsey Laird for valuable comments.
Group: Koch Laboratory, KLAB
Subject Keywords: attention, eye tracking, faces, text, natural scenes, saliency model, human psychophysics
Official Citation: Cerf, M., Frady, E. P., & Koch, C. (2009). Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9(12):10, 1–15, http://journalofvision.org/9/12/10/, doi:10.1167/9.12.10.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
Deposited By: KLAB Import
Deposited On: 14 Apr 2010 03:33
Last Modified: 13 Mar 2017 22:52